Dataset fields (name: type):
title: string
paper_decision: string
review_1: string
rebuttals_1: string
review_2: string
rebuttals_2: string
review_3: string
rebuttals_3: string
review_4: string
rebuttals_4: string
global_rebuttals: string
dataset_source: string
conference_year: int64
review_5: string
rebuttals_5: string
review_6: string
rebuttals_6: string
review_7: string
rebuttals_7: string
review_8: string
rebuttals_8: string
RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency
Accept (poster)
Summary: This paper proposes MucRays, which equips ray-based neural functions with multi-view geometry consistency. The framework contains three parts: a ray-surface distance field, a dual-ray visibility classifier, and a multi-view consistency optimization strategy. Quantitative results of MucRays surpass the existing coordinate- or ray-based networks for synthetic and real-world scene modeling. This ray-based method achieves 1000x faster depth image inference.

Strengths:
1. The dual-ray visibility classifier is interesting. It is well-motivated and acts as the foundation of the multi-view consistency optimization.
2. The quantitative results are good. The high performance of MucRays on the DAE metric proves its ability to render more accurate depth images.

Weaknesses:
1. The qualitative results do not satisfy me. It seems that MucRays has difficulty representing thin structures. For example, in the Reception scene in Figure 4 (Appendix), the arm of the desk lamp is missing in both the distance map and the mesh. I would like to see a discussion of this phenomenon.
2. I think NeuS (NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction) is a better baseline than DeepSDF because it could use both depth and color for supervision.
3. I assume that this paper focuses on better geometry modeling. However, it seems that the geometric result (mesh result) on Lego (Figure 5) is over-smoothed compared with DS-NeRF or NDF. Is this caused by TSDF or by the nature of the ray-distance representation itself?

Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Main questions are in the Weaknesses part. More detailed comments: the authors should use consistent abbreviations. For example, L. 80 uses UDF for paper [13], while the remaining contents use NDF for [13].
Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address the main concerns below.

**Q1: The qualitative results do not satisfy me. It seems that MucRays has difficulty representing thin structures. For example, in the Reception scene in Figure 4 (Appendix), the arm of the desk lamp is missing in both the distance map and the mesh. I would like to see a discussion of this phenomenon.**

**A1:** Unarguably, recovering thin structures is particularly challenging for almost all methods, including ours and the baselines. For our method, multiple factors could contribute to the difficulty in reconstructing thin structures. As a probe, we change our multi-view consistency loss function $\ell_{mv}$ (Eq. 3) from $\ell_1$ to $\ell_2$ optimization and conduct an additional experiment on *Reception* of the DMSR dataset. The following table and Figure 8 in the submission file show the results. We can see that the $\ell_2$-based loss function reconstructs thin structures better, but at the cost of noisier results. Fully addressing this issue is of great interest, and we leave it for future exploration.

| | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| MucRays-$\ell_2$ | **6.56** | 12.141 / 6.853 |
| **MucRays-$\ell_1$** | 6.96 | **10.632** / **5.714** |

**Q2: I think NeuS (NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction) is a better baseline than DeepSDF because it could use both depth and color for supervision.**

**A2:** Thank you for this suggestion; we conduct additional experiments for NeuS on the Blender dataset. From the following table, our method is clearly better than NeuS in both reconstruction accuracy and novel-view rendering speed. Figure 9 in the submission file shows the qualitative results. We will add this new baseline to the main paper in the next version.
| | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | Rendering time (seconds) |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| NeuS | 12.10 | 4.662 / 0.938 | **27.19** | **0.911** | **0.100** | 32.793 |
| **MucRays** | **8.17** | **3.295** / **0.755** | 26.52 | 0.910 | 0.099 | **0.019** |

**Q3: I assume that this paper focuses on better geometry modeling. However, it seems that the geometric result (mesh result) on Lego (Figure 5) is over-smoothed compared with DS-NeRF or NDF. Is this caused by TSDF or by the nature of the ray-distance representation itself?**

**A3:** This is a very interesting question. The observed over-smoothing may be caused by multiple potential factors, including the choice of ray parameterization, the choice of loss functions, the TSDF postprocessing, etc. Additionally, the nature of continuous ray-distance functions (MLPs) is also likely to produce smooth predictions for similar input rays. In this regard, we hope that our method can inspire more advanced research to tackle these core challenges in the future.

**Q4: The authors should use consistent abbreviations. For example, L. 80 uses UDF for paper [13], while the remaining contents use NDF for [13].**

**A4:** Thanks. We will fix it in the next version.

---

Rebuttal Comment 1.1: Title: Reply
Comment: The rebuttal has solved my concerns and I maintain my initial rating.

---

Reply to Comment 1.1.1: Title: Thanks
Comment: We thank the reviewer for taking the time to read our rebuttal materials and for the encouraging rating. -Authors.
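The $\ell_1$-versus-$\ell_2$ trade-off discussed in A1 above can be sketched in code. This is an illustrative stand-in, not the authors' actual Eq. 3: the function name, the soft visibility weighting, and the normalization are our assumptions.

```python
import numpy as np

def multiview_consistency_loss(pred_dist, target_dist, visibility, norm="l1"):
    """Hypothetical visibility-weighted consistency loss between predicted
    ray-surface distances and the distances implied by other views."""
    residual = pred_dist - target_dist
    if norm == "l1":
        per_ray = np.abs(residual)   # robust to outliers, may blur thin structures
    elif norm == "l2":
        per_ray = residual ** 2      # punishes large errors harder, amplifies noise
    else:
        raise ValueError(f"unknown norm: {norm}")
    # only mutually visible rays (soft weights in [0, 1]) contribute
    return float(np.sum(visibility * per_ray) / (np.sum(visibility) + 1e-8))
```

Under this sketch, a single large residual dominates the $\ell_2$ loss, which matches the rebuttal's observation that $\ell_2$ recovers thin structures at the cost of noisier reconstructions.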
Summary: This paper presents a new strategy for designing ray-based neural representations of 3D shapes. Ray-based approaches to shape representation are a recently emerging idea that bypasses the extensive point-based evaluation required by conventional methods such as signed distance functions. A key missing component in existing ray-based neural representations is multi-view consistency. This paper proposes training a dual-ray visibility classifier and using that network's output to guide the training of the network for ray-distance prediction. The method is extensively evaluated on synthetic and real-world datasets, showing its superiority over previous ray-based methods in predicting ray hit points from novel views. Moreover, this paper also includes evaluations of color prediction with this ray-based method, showing that it achieves comparable performance to previous efficient representations.

Strengths:
- Addresses a key issue of multi-view inconsistency in current ray-based scene representations.
- Clear writing and figure presentation.
- Extensive evaluation showing the efficacy of the proposed strategy, outperforming previous baselines without slowing down inference.

Weaknesses:
- The proposed method operates under the setting where depth maps are available for all views, which is okay, but the paper would be more complete if it discussed how much the results degrade when depth is unknown, which is actually the setting of LFN.
- It is not totally clear how the baselines are implemented. For example, when using LFN and PRIF, the supplementary material mentions that it uses the same architecture as the official setup. However, it is not clear whether the LFN/PRIF results are obtained with the proposed classifier. At least based on the writing, the ambiguity is not eliminated.
- The proposed method also adopts a different ray parametrization than LFN and PRIF. It is not clear what the reason is behind this design choice.
- The proposed method is implemented as a 13-layer SIREN with 1024 hidden channels, which differs from the previous baselines. The paper does not include any analysis of the impact of varying layer depth and hidden channels.
- In L123, the paper says the network predictions "must satisfy a transformation equation", without defining what this equation is.
- Related work misses any discussion of space carving / visual hulls (e.g., [Matusik 2000], [Kutulakos 2000]), which are important early concepts behind the idea of visibility checking proposed in this work. Related work also misses some recent papers on light field / ray-based representations (e.g., HyperReel [CVPR 2023], SRT [CVPR 2022], SIGNET [ICCV 2021]).
- L237 typo: "RTX 30390 GPU".

Technical Quality: 3 good
Clarity: 4 excellent

Questions for Authors:
- In Table 5, the performance drops significantly after removing the classifier, and it actually becomes much worse than the various baselines in Table 2. This is confusing and relates to the above-mentioned weakness about how the baselines are implemented. Do the baseline methods included in Table 2, specifically LFN and PRIF, involve a classifier? That would totally change how to correctly interpret the results in Table 5.
- If the baseline methods do not involve a classifier, and they differ only in ray parametrization with a slightly smaller network, then the natural question becomes: how does LFN/PRIF + classifier perform?
- In general, how does the ray parametrization affect the performance? It appears that it would have a minor impact compared to the role of the classifier, but the paper does not provide enough information to draw any conclusion.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address the main concerns below.

**Q1: The proposed method operates under the setting where depth maps are available for all views, which is okay, but the paper would be more complete if it discusses how bad the results would become if depth is unknown, which is actually the setting of LFN.**

**A1:** We thank the reviewer for this suggestion. Our method indeed operates with the requirement of depth supervision. Nevertheless, with the advancement of depth estimation from RGB images, it is quite feasible to obtain sparse depth signals using existing techniques such as SfM or learning-based monocular depth estimators. In this regard, we additionally provide experimental results of our method using sparse depth values as supervision. From the following table, which reports the metrics on *Lego* in the Blender dataset, we can see that our method still achieves satisfactory performance even when only 1% of depth values are used in training. We hypothesize that such robustness comes from our simple multi-view consistency constraint, because many depth values in the training set may be redundant thanks to our effective classifier. Figure 4 in the submission file shows the qualitative results. We will include these results in the next version.

| Sparsity of depth supervision | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| 1% | 8.69 | 1.517 / 0.937 |
| 5% | 8.55 | 1.522 / 0.928 |
| 10% | 8.06 | 1.503 / 0.952 |
| **100% (MucRays)** | **7.98** | **1.095** / **0.702** |

**Q2: Not totally clear how the baselines are implemented. For example, when using LFN and PRIF, the supplementary material mentions that it uses the same architecture as the official setup. However, it is not clear whether the LFN/PRIF results are obtained still with the proposed classifier.
At least based on the writing, it does not eliminate the ambiguity.**

**A2:** The LFN/PRIF results are obtained without our classifier. Here, we further conduct experiments on LFN/PRIF using our classifier for comparison. As shown in the following table, our method still achieves the best performance. Figure 1 in the submission file shows the qualitative results.

| | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| LFN + Our Visibility Classifier | 90.82 | 121.289 / 44.564 |
| PRIF + Our Visibility Classifier | 9.97 | 4.187 / 0.891 |
| **MucRays** | **7.97** | **3.388** / **0.663** |

**Q3: The proposed method also adopts a different ray parametrization than LFN and PRIF. It is not clear what the reason is behind this design choice.**

**A3:** In fact, our pipeline is amenable to different types of ray parameterizations. To verify this, we conduct additional ablation studies by replacing our spherical coordinates with those used in LFN and PRIF. The following table shows the results on the Blender dataset. We can see that the PRIF parameterization can also achieve satisfactory performance, while the LFN parameterization is inferior in our pipeline. Figure 1 in the submission file shows qualitative results.

| | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| LFN param. + (Ray-surface Distance Network + Visibility Classifier) | 89.53 | 82.102 / 10.298 |
| PRIF param. + (Ray-surface Distance Network + Visibility Classifier) | 8.42 | 3.409 / **0.621** |
| (LFN param. + Ray-surface Distance Network) + (Our param. + Visibility Classifier) | 91.30 | 142.570 / 74.345 |
| (PRIF param. + Ray-surface Distance Network) + (Our param. + Visibility Classifier) | 8.44 | 3.526 / 0.653 |
| **MucRays** | **7.97** | **3.388** / 0.663 |

**Q4: The proposed method is implemented as a 13-layer SIREN with 1024 hidden channels, which is different than previous baselines. The paper does not include any analysis on the impact of varying layer depth and hidden channels.**

**A4:** Thanks for the suggestion to analyze different layer depths and channel widths. We conduct additional experiments as shown in the following table. We can see that our framework tends to favor a wider or deeper network. However, extensively exploring an optimal network architecture is non-trivial, and we leave it for future work.

| | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| 8 layers, 512 hidden channels | 9.13 | 4.311 / 0.954 |
| 8 layers, 1024 hidden channels | 8.52 | 4.010 / 0.805 |
| 13 layers, 512 hidden channels | 8.80 | 4.085 / 0.854 |
| **MucRays** | **7.97** | **3.388** / **0.663** |

**Q5: In L123, the paper says the network predictions "must satisfy a transformation equation", without defining what this equation is.**

**A5:** Thank you for pointing this out. The transformation equation is originally presented in Appendix A.1.3, and we will move it to the main text in the next version.

**Q6: Related work misses any discussion on space carving / visual hull (e.g., [Matusik 2000], [Kutulakos 2000]), ...... Related work also misses some recent papers on light field / ray-based representation (e.g. HyperReel [CVPR 2023], SRT [CVPR 2022], SIGNET [ICCV 2021]).**

**Q7: L237 typo: "RTX 30390 GPU".**

**A6-A7:** Thank you for sharing the related works. We will include and discuss them in the next version. Typos will be corrected as well.

**Q8: In Table 5, the performance drops significantly after removing the classifier ......**

**A8:** Responded to in **A2**.

**Q9: If the baseline methods do not involve a classifier ......**

**A9:** Responded to in **A2**.
**Q10: In general, how does the ray parametrization affect the performance? ......**

**A10:** Responded to in **A3**.

---

Rebuttal 2: Comment: The response from the authors has sufficiently addressed the issues raised in the initial review. The final rating is updated to "Accept".

---

Rebuttal Comment 2.1: Title: Thanks
Comment: We thank the reviewer for taking the time to read our rebuttal materials and for the very positive rating. -Authors.
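For readers unfamiliar with the ray parameterizations debated in Q3/Q10 above, here is a minimal sketch of two baseline-style encodings: Plücker coordinates (the scheme LFN uses) and a perpendicular-foot encoding (our reading of PRIF's scheme). MucRays' own spherical parameterization is not reproduced here, and the function names are ours.

```python
import numpy as np

def pluecker(origin, direction):
    """LFN-style Pluecker coordinates: unit direction plus moment (o x d).
    The moment makes the encoding invariant to sliding the origin along the ray."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

def perpendicular_foot(origin, direction):
    """Perpendicular-foot encoding (our understanding of PRIF): the point on
    the ray closest to the world origin, concatenated with the unit direction."""
    d = direction / np.linalg.norm(direction)
    foot = origin - np.dot(origin, d) * d
    return np.concatenate([foot, d])
```

Both map a ray, rather than a 3D point, to a fixed-length 6-D code, which is what makes single-evaluation depth rendering possible in ray-based networks.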
Summary: The paper proposes a ray-based neural rendering method that achieves good reconstruction quality from depth maps or RGB-D inputs using only one network evaluation per pixel at test time. The method requires two networks: a ray-surface distance network and a dual-ray visibility classifier. In the first stage, the visibility classifier is trained on the multi-view depth maps. In the second stage, the ray-surface distance network is trained with both ground-truth rays from the depth maps and randomly sampled novel rays. The visibilities of the random rays are determined by the visibility classifier, and only the distances of the visible rays are supervised. The proposed approach is extremely fast and achieves competitive quality on three RGB-D datasets.

Strengths:
* The proposed two-stage method is novel.
* The proposed method is extremely fast during evaluation -- only one network evaluation is needed per ray.
* The method achieves competitive performance on both synthetic and real datasets.
* The paper provides useful insights on the derivation of surface normals from the proposed ray-distance field.

Weaknesses:
* The method requires ground-truth depth information for training, which is not the case for light field works such as [57]. This can significantly limit its use cases.
* Comparisons with some well-performing light field methods such as [57] are lacking.
* The method relies on the visibility classifier to generalize to novel rays, which can potentially be unreliable.
* The performance of the proposed method is lacking, especially according to the results in the appendix.
* It might help to understand the effect of the visibility network better if there were visualizations of its predictions.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: There appear to be some fixed-pattern noise artifacts in the demo video. I wonder what the cause is?
Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations and broader impacts of the work are adequately discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address the main concerns below.

**Q1: The method requires ground truth depth information for training, which is not the case for light field works such as [57]. This can significantly limit its use cases.**

**Q2: There lacked comparisons with some good performing light field methods such as [57].**

**A1-A2:** The primary goal of our method is to model 3D surface geometry, instead of learning radiance fields for novel-view RGB rendering. LFNR [57] is an image-conditioned light field method for RGB rendering, which is dramatically different from and not directly comparable with ours. Unarguably, both pipelines have their own merits and applications, and our method has great potential in robotic mapping, navigation, obstacle avoidance, etc. Our method indeed requires depth supervision in the current submission. However, we argue that, with the advancement of depth estimation from RGB images, it is quite feasible to obtain sparse depth signals using existing techniques such as SfM or learning-based monocular depth estimators. In this regard, we additionally provide experimental results of our method using sparse depth values as supervision. From the following table, which reports the metrics on *Lego* in the Blender dataset, we can see that our method still achieves satisfactory performance even when only 1% of depth values are used in training. We hypothesize that such robustness comes from our simple multi-view consistency constraint, because many depth values in the training set may be redundant thanks to our effective classifier. Figure 4 in the submission file shows the qualitative results. We will include these results in the next version.
| Sparsity of depth supervision | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| 1% | 8.69 | 1.517 / 0.937 |
| 5% | 8.55 | 1.522 / 0.928 |
| 10% | 8.06 | 1.503 / 0.952 |
| **100% (MucRays)** | **7.98** | **1.095** / **0.702** |

**Q3: The method relies on the visibility classifier to generalize to novel rays, which can potentially be unreliable.**

**A3:** This is a very interesting point. To further evaluate the reliability of our pipeline, we conduct a series of experiments that add different levels of noise to our classifier. In particular, we add random noise drawn from a normal distribution $\mathcal{N}(0, \sigma^2)$ to the visibility score, clip the result to $[0, 1]$, and then use the noisy score in our multi-view consistency loss $\ell_{mv}$ (Eq. 3) to optimize our ray-surface distance network. The following table shows the accuracy and F1 scores of our visibility classifier and the final DAE/CD scores on *Lego* in the Blender dataset. We can see that, once the accuracy of the visibility classifier degrades, the reconstruction performance decreases sharply due to the large amount of mismatched training signals. This highlights that our classifier plays a crucial role in the pipeline. Figure 5 in the submission file shows qualitative results. We will include these results in the next version.

| Noise level ($\sigma^2$) | Acc. (\%) $\uparrow$ | F1 (\%) $\uparrow$ | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|:---:|:---:|
| 1 | 62.56 | 59.06 | 23.09 | 2.713 / 0.960 |
| 0.5 | 63.99 | 67.76 | 21.16 | 2.248 / 0.889 |
| 0.1 | 68.01 | 72.59 | 16.83 | 1.850 / 0.860 |
| **0 (MucRays)** | **87.74** | **85.24** | **7.98** | **1.095** / **0.702** |

**Q4: The performance of the proposed method is lacking, especially according to the results in the appendix.**

**A4:** Our method is extensively evaluated on three public datasets together with a series of ablation studies to verify the effectiveness of our design, achieving a clear advantage in surface point regression in terms of both accuracy and efficiency. We are always open to conducting more experiments at the request of reviewers.

**Q5: It might help understand the effect of visibility network better if there are visualizations of the visibility network predictions.**

**A5:** We highly appreciate this suggestion. To better understand the classifier, we initially presented quantitative results in Table 3 in the Appendix. In addition, we further provide qualitative results of the classifier (query point + sampled rays with visibility predictions) in Figure 6 in the submission file, which will be added to the paper in the next version.

**Q6: There appear to be some fixed-pattern noise artifacts in the demo video. I wonder what is the cause?**

**A6:** Thank you for pointing out this issue. The artifacts are caused by the choice of spherical coordinates. As also suggested by Reviewer sym2, we conduct additional experiments with two different choices of ray parameterizations on the Blender dataset. As shown in the following table, our spherical coordinates achieve the best DAE scores, and the artifacts can be easily fixed by using PRIF's ray parameterization, as shown in Figure 7 in the submission file.
We leave fully addressing this issue for future work.

| | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| LFN param. + (Ray-surface Distance Network + Visibility Classifier) | 89.53 | 82.102 / 10.298 |
| PRIF param. + (Ray-surface Distance Network + Visibility Classifier) | 8.42 | 3.409 / **0.621** |
| (LFN param. + Ray-surface Distance Network) + (Our param. + Visibility Classifier) | 91.30 | 142.570 / 74.345 |
| (PRIF param. + Ray-surface Distance Network) + (Our param. + Visibility Classifier) | 8.44 | 3.526 / 0.653 |
| **MucRays** | **7.97** | **3.388** / 0.663 |

---

Rebuttal Comment 1.1: Title: Thanks for the rebuttal
Comment: I would like to thank the authors for the clarifications, as well as the extra experiments and figures. I decide to keep my rating of weak accept. Although the paper has proposed an interesting way to model 3D geometry, its drawbacks are also quite significant: it requires depth maps to train, and the quality does not stand out among the baselines.

---

Reply to Comment 1.1.1: Title: Thanks
Comment: We thank the reviewer for reviewing our rebuttal materials and providing valuable feedback. We agree on the desirability of reconstructing geometry solely from RGB images. Nevertheless, considering the range of input modalities (RGB, depth, sparse point clouds, etc.), along with the requirements for high output surface accuracy and efficient rendering, achieving a good balance is a significant challenge. While our method indeed utilizes (sparse) depth values in training, it achieves the best reconstruction accuracy over the existing OF/SDF/UDF/NeRF/LFN/Distance-based methods under equivalent depth supervision. In addition, it is 1000x faster at rendering views than prevailing coordinate-based methods (OF/SDF/UDF/NeRF).
There are also challenges remaining, including training without poses, training with RGB alone, generalizing to multi-scenes, enhancing network backbones, expediting training, and more. We hope that our paper could inspire more advanced methods to tackle all these challenges in the future. On the whole, we highly appreciate the reviewer's efforts in improving our manuscript and fostering thought-provoking discussions.
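The classifier stress test in A3 of this thread (Gaussian noise added to visibility scores, then clipped to [0, 1]) can be sketched as follows; the function name and use of NumPy are our assumptions, not the authors' code.

```python
import numpy as np

def perturb_visibility(scores, sigma, rng=None):
    """Corrupt soft visibility scores in [0, 1] with Gaussian noise and clip,
    mimicking the reliability ablation described in the rebuttal."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = scores + rng.normal(0.0, sigma, size=scores.shape)
    return np.clip(noisy, 0.0, 1.0)
```

At sigma = 0 the scores pass through unchanged, matching the "0 (MucRays)" row of the reported table; larger sigma pushes classifier accuracy toward chance, which is what degrades the downstream DAE/CD scores.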
Summary: This paper proposes a framework, MucRays, for 3D shape representation. Specifically, the authors formulate 3D shapes as ray-based neural functions and incorporate multi-view geometry consistency to improve performance. To learn multi-view geometry consistency, an auxiliary network is introduced to classify the mutual visibility of two sampled rays. Experiments are conducted on various datasets, showing good performance.

Strengths:
1. The paper is easy to follow and understand.
2. The authors propose an effective framework for 3D shape representation, showing good performance on various datasets.

Weaknesses:
1. The paper only compares rendering time; training/optimization time is not compared.
2. The ablation study is not convincing. The authors should compare with and without postprocessing (Section 3.2).
3. In Table 5, the performance of "w/o classifier" is much worse than the full model, and the error is very high. It would be better to discuss some possible reasons.
4. Two-stage training is not elegant. Is it possible to train these two networks simultaneously?
5. Line 57: the "ray-surface distance field" representation was introduced by previous works and cannot be summarized as a contribution of this work.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Negative impacts are mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and address the main concerns below.

**Q1: This paper only compares rendering time, but training/optimization time is not compared.**

**A1:** Thanks for the suggestion. The following table compares the average training time of our method and all baselines on each scene of the Blender dataset, using a single NVIDIA RTX 3090 GPU and an AMD Ryzen 7 CPU. Since we apply a two-stage training strategy, our method is not as fast as the baselines during optimization. Nevertheless, once our ray-surface distance network is optimized, it achieves superior efficiency in rendering novel views, as shown in Table 1 in the main text. We will clarify this point in the next version.

| | Training time (hours) |
|---|:---:|
| OF | **0.2** |
| DeepSDF | 0.6 |
| NDF | 2.1 |
| DS-NeRF | 22.8 |
| NeuS | 10.2 |
| LFN | 0.9 |
| PRIF | 2.2 |
| **MucRays (Ours)** | 24.9 |

**Q2: The ablation study is not convincing. The authors should compare with and without postprocessing (Section 3.2).**

**A2:** Section 3.2 does not mention any postprocessing step. In Section 3.4, we discuss the removal of potential outlier 3D points with the aid of our derived closed-form surface normals. Note that this postprocessing step is only applied to clean the reconstructed 3D point clouds when calculating CD scores. We do not use any postprocessing step when calculating DAE scores. As shown in the following table, we conduct additional experiments with and without the outlier-removal step on the Blender dataset. Figure 2 in the submission file shows the qualitative results. We will slightly rephrase lines 197-198 in Section 3.4 to clarify how this simple postprocessing step is used when obtaining explicit 3D point clouds.
| | DAE $\downarrow$ | CD ($\times10^{-3}$) $\downarrow$ (mean / median) |
|---|:---:|:---:|
| w/o post-processing | **7.97** | 4.157 / 0.777 |
| **MucRays (w/ post-processing)** | **7.97** | **3.388** / **0.663** |

**Q3: Table 5, the performance of "w/o classifier" is quite worse than the full model, and the error is very high. It would be better to discuss some possible reasons.**

**A3:** This is a very good point. The key issue of a naive ray-based distance function is the lack of generalization across novel views during testing: training a single ray-based network (without the classifier) tends to fit all ray-distance data pairs of the training set, but cannot guarantee the consistency of surface distances between seen (training) rays and unseen (testing) rays. With the help of our well-trained dual-ray visibility classifier, however, the ray-distance network must satisfy a transformation equation (Appendix A.1.3). This is achieved by our multi-view consistency loss function $\ell_{mv}$ (Eq. 3 of the main text), which drives the consistency of surface distances between (unlimited) seen and unseen rays. We show the qualitative comparisons in Figure 3 in the submission file.

**Q4: 2-stage training is not elegant. Is that possible to train these two networks simultaneously?**

**A4:** We agree that an ideal strategy would train both networks simultaneously. As shown in the following table and Figure 3 in the submission file, we simply train our two networks at the same time on the Blender dataset. However, not surprisingly, the performance of one-stage training drops noticeably, primarily because the classifier is inaccurate at the early stage and unlikely to provide effective constraints for the ray-distance network given a similar number of training steps. Nevertheless, it is an interesting direction for our future work.
| | DAE$\downarrow$| CD ($\times10^{-3}$) $\downarrow$ (mean / median) | |--------------------|:---------:|:-----------------:| | one-stage training | 12.41 | 4.032 / 0.659 | | **two-stage training** | **7.97**| **3.388** / **0.663**| **Q5: Line 57, "ray-surface distance field" representation is introduced by previous works and cannot be summarized as one contribution of this work.** **A5:** Thank you for this advice. We will rephrase lines 57-58 or alternatively combine lines 57-60 in the next version. --- Rebuttal Comment 1.1: Title: Waiting for Discussion Comment: Dear reviewer fezd, Thank you again for your initial valuable feedback on our manuscript. While we understand that your schedule is very demanding, we are still waiting for your thoughts on our rebuttal materials (summarized in the "Author Rebuttal by Authors" section). Regarding all your concerns, including training speed, more ablation studies, single-stage training, etc., we believe they are all clearly addressed above. We would greatly appreciate any additional comments you could provide. Your time and consideration are highly valued. Regards, Authors
Rebuttal 1: Rebuttal: We appreciate all the insightful comments. After carefully improving the quality of our work, we present a document containing additional experimental results. The responses to the comments include: - Clarification of our dual-ray visibility classifier. - Evaluations of our method using sparse depth supervision. - Comparisons of our method using different ray parameterizations. - Comparisons of adopting our dual-ray visibility classifier to other baselines. - Additional ablations on our dual-ray visibility classifier, post-processing, and the training strategy. - Additional ablations on our ray-surface distance network, including network architecture and thin structure reconstruction. - Additional baseline comparisons. Pdf: /pdf/71b372b780a1d9c72da9d395333cb550f51cfa17.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
No-regret Algorithms for Fair Resource Allocation
Accept (poster)
Summary: The paper studies an online allocation problem where the goal is to find \alpha-fair solutions. The authors provide an algorithm that achieves constant approximate regret for this problem, along with a non-tight lower bound that improves a previously known result. The suggested algorithm is compared with other approaches over synthetic and real data on the fair caching and fair scheduling problems. Strengths: - Well-written paper - Important problem with real-life applications - Technically sound and strong submission. Nice techniques are used. Weaknesses: - The upper and lower bounds are not tight. - The lower bound that is provided is a slight improvement over a previously known lower bound, which indicates that c_{\alpha} >1, as shown in Figure 7 in the appendix. - The experimental results are not very enlightening. It would be nice if the authors highlighted what we really learn from them, and provided an intuitive explanation of why, for smaller values of \alpha, their algorithm is outperformed by other algorithms with respect to the average case, etc. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Something is inconsistent with respect to line 5 of Algorithm 1 and the description in lines 230-23. In my understanding, in round t, the agent divides x_i(t-1) by R_i(t-1). In the algorithm it seems that indeed you do that, but in the description, you are saying that essentially x_i(t-1) is divided by R_i(t-2). Which one is correct? Code for the experiments is not included in the supplementary material! Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the authors have adequately addressed the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their feedback, which we address below. [$\textbf{On the experimental results}$]: The goal of our algorithm is to provide fairness (i.e., to roughly ensure that the hit rates of all users are as close to each other as possible) without significantly sacrificing the average hit rate. As $\alpha$ is increased, the algorithm should tend towards prioritizing fairness over average hit rate. Our experiments demonstrate that the algorithm indeed succeeds in these goals. For both the synthetic and CDN datasets, we see that our algorithm achieves a comparable average hit rate to the algorithm of Si Salem et al. at small $\alpha$, while noticeably outperforming it on average hit rate for larger $\alpha$. At the same time, the algorithm outperforms on the fairness metrics for the full range of $\alpha$ on the CDN dataset. On the synthetic dataset, the two algorithms are comparable in terms of fairness — neither one consistently outperforms the other. Our algorithm consistently performs within the range of the offline optimal allocation — which is a theoretical baseline that cannot be run in practice (as it would require knowing the full sequence of requests ahead of time). This offline optimal solution gives an upper bound on the performance of any online algorithm. Finally, our algorithm is comparable or better in terms of average hit rate, but, as expected, stronger in terms of fairness metrics, as compared to algorithms like LRU and LFU which do not try to ensure fairness. Regarding performance for small $\alpha$: we disagree that our algorithm is consistently outperformed in this regime. It performs very similarly to the Si Salem et al. algorithm, and in fact outperforms the LRU and LFU baselines. It is only outperformed by the offline optimal solution, which as discussed is meant to give a theoretical upper bound on performance and cannot be implemented in the online setting. 
[$\textbf{Question on Algorithm 1}$]: In line 5, we indeed divide $x_i(t-1)$ by $R_i(t-1)^\alpha$. Thanks for pointing out this typo — we will correct the description in the revised version. [$\textbf{Releasing Codes}$] We will release the code upon acceptance of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I am convinced about the merit of the experiments. After reading the other reviews and the author response, I will revise my score from a 6 to a 7.
Summary: The paper studies a general online resource allocation in which there are $m$ agents and a limited resource to be allocated over T rounds. The goal is to achieve sublinear regret for the aggregate utilities of the agents when compared to the optimal fixed offline allocation policy. In the online problem, at each round an agent has a demand vector; first, the decision-maker decides on an allocation according to some policy, and then the aggregate demand vector is revealed. An unrestricted adversary fixes the demand vector (this assumption departs from previous works). The utility for any agent is given by a concave utility function controlled by a parameter $\alpha$, which varies between 0 and 1 and expresses the trade-off between fairness and efficiency, going from a very equitable allocation to the classic problem of maximizing social welfare without looking at fairness. The NOFRA problem is defined, along with mild assumptions on the demand and allocation vectors. A main challenge comes from the fact that the objective function, which is the sum of the agents’ utilities, is not additive and, thus, not separable across time. Instead, we have to optimize in a “global” way. The objective, as mentioned before, is to design an online resource allocation policy that minimizes regret. The contributions of the paper include (1) an efficient online resource allocation policy called OFA (Online Fair Allocation) that achieves approximate sublinear regret for any given $\alpha$ in the utility functions, (2) a (non-tight) lower bound (that also improves over previous work) on the approximate factor needed to attain sublinear regret for any online learning policy, and, (3) numerical simulation results with several baselines for the fair caching problem. 
Strengths: - The treatment of the problem is fairly complete, and, despite not obtaining a tight bound, it is important that the authors establish that indeed some notion of approximate regret minimization is needed to achieve any sublinear bound over time. - Technical side: A key step is lemma 1, which introduces a surrogate problem, whose regret upper bounds the regret of the original one, and it can be solved as an online linear optimization problem. This is an elegant way to overcome the fact that the objective is not additive over rounds. The introduction of the technique to control the norm of gradients is also nice. - The comparison with previous work is clear, the improvements and technical innovations are also clear, so overall the authors do a good job in convincingly placing their work in the literature. I also find quite interesting the transitions in the regret bound for different values of $\alpha$. It is also positive that it recovers the known regret bound when $\alpha=0$. Weaknesses: - Some of the experiments are not very convincing to make the theory stand out more. For example, assuming that the average hitrate is what we would consider more important, there’s not any improvement (in the real-world dataset it’s actually performing worse) compared to the Si Salem et al. baseline. Something similar happens also with the fair scheduling dataset in the appendix. - It would make the results much stronger if there would be some indication that the techniques hold beyond the specific choice of $\alpha$-fair utility function. How are these techniques going to be generalized? It’s good that the authors give some examples, but is the proposed utility function capturing, at least qualitatively, how agents evaluate their reward? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Why examine $\alpha>1$? What does a utility function now mean? - Why at Jain’s index is your policy better than opt offline? 
Should that be interpreted in some way? - How do you think that your techniques could be generalized to other, more general concave utility functions? Can you give some extra reasoning of why you chose this particular form of utility function? Minor comments: - In p.19 (p.7 of the appendix), erase the comment left there. - Maybe you want to define somewhere the hitrate notion used in the experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and questions, which we respond to below. [$\textbf{On the significance of the experimental results}$]: Although there is no major improvement in the average hit rate, the goal of our algorithm is to provide fairness (i.e., to ensure that the hit rates of all users are as close to each other as possible) without significantly sacrificing the average hit rate. To empirically show this fairness, we plot the minimum hit rate and Jain's Index (which is a standard notion of fairness that is maximized when all users have the same hit rate). It can be seen from the experiments that our algorithm has a higher Jain's Index and minimum hit rate than the baseline algorithms without sacrificing significantly in terms of average hit rate. [$\textbf{The alpha >1 regime}$] To intuitively understand the implication of the utility function for $\alpha > 1$, recall that the alpha-fairness function $\phi$ is proportional to $\sum_i \frac{1}{R_i(T)^{\alpha-1}}$. As $\alpha$ grows very large (i.e., much larger than $1$), this sum is dominated by $\frac{1}{\min_i R_i(T)^{\alpha-1}}$. Hence, maximizing this utility function for large $\alpha$ becomes nearly equivalent to maximizing the minimum cumulative reward. This is a natural fairness objective. Generally, increasing $\alpha$ from $0$ to $\infty$ yields a continuum of objective functions that increasingly weight fairness as compared to total utility. [$\textbf{Jain's Index}$] As described in line 315, Jain's Index is a metric that quantifies whether the users are receiving a fair share of resources. In the context of fair caching, Jain's Index quantifies whether the users are receiving similar hit rates, but it does not take into account whether the average hit rate is maximized. The Jain's Index of a policy can be high even when the average hit rate is low, as long as the hit rates of individual users are close to each other. 
For example, in Figure 3, even though Jain's Index of our policy is higher than the optimal offline, the average hit rate is lower. Jain's Index does not capture the overall performance of a policy. The main reason we plot Jain's Index is to demonstrate that our policy (as well as the offline optimal) becomes "fairer" as the $\alpha$ parameter is increased. Comparing Jain's Index of different policies might not necessarily lead to any concrete conclusion. [$\textbf{Extension to General Concave Utility functions}$] The greedy policy can be extended straightforwardly for a general concave utility function $\phi$ by replacing the quantity $\frac{1}{R_i^\alpha}$ by $\phi' (R_i)$ in line 5 of Algorithm 1. However, with regards to the regret analysis, note that in Eq (10), we explicitly use the positive homogeneity property of the $\alpha$-fair utility function. Hence, we do not see an immediate way to extend the analysis for a general concave utility function. Nevertheless, in the special case when the utility function $\phi$ is a sum of $\alpha$-fair utilities for different constant $\alpha$'s, the analysis goes through. It would be interesting to give even more general regret bounds for concave utilities. [$\textbf{Minor Comments}$] Thanks for pointing these out. We will make the necessary changes in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response.
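To make the gradient weighting discussed above concrete (line 5 of Algorithm 1 uses the factor $1/R_i^\alpha$, which the rebuttal notes generalizes to $\phi'(R_i)$ for other concave utilities), here is a toy sketch in a simplified divisible-budget setting. The function name and the feasible set are hypothetical illustrations, not the paper's actual OFA policy:

```python
import numpy as np

def fairness_weighted_allocation(demands, R, alpha, budget=1.0):
    """Toy alpha-fair allocation step (illustrative, NOT the paper's OFA).

    Splits a divisible unit budget across agents in proportion to
    demand_i * phi'(R_i), where phi'(R) = R**(-alpha) is the derivative
    of the alpha-fair utility. Larger alpha up-weights agents whose
    cumulative reward R_i is small, i.e., it prioritizes fairness.
    """
    weights = demands * np.power(R, -alpha)
    return budget * weights / weights.sum()

# Two agents with identical demands but unequal reward history:
demands = np.array([1.0, 1.0])
R = np.array([1.0, 9.0])  # agent 0 has accrued far less reward so far

x0 = fairness_weighted_allocation(demands, R, alpha=0.0)  # ignores history
x1 = fairness_weighted_allocation(demands, R, alpha=1.0)  # proportional fair
# x0 -> [0.5, 0.5]; x1 -> [0.9, 0.1] (the under-served agent is favored)
```

The design point is simply that increasing $\alpha$ shifts allocations toward agents with the smallest cumulative rewards, matching the rebuttal's description of the fairness-efficiency trade-off.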
Summary: This work studies an abstract fair resource allocation problem. It abstracts out problems such as cache and job scheduling. The notion of fairness considered is alpha-fairness (in the range of alpha between 0 and 1), which has been previously studied and is known to encapsulate many other fairness notions. The paper provides sublinear approximate regret bounds for the abstract fair resource allocation problem through a clever reduction to online linear optimization, and analyzes it using prior work on online gradient descent in conjunction with a bootstrapping technique. The approximation constant is small (around 1.45) and they further show that the notion of approximate regret is necessary for this problem (i.e. that sublinear regret bounds require an approximation constant larger than 1). They also back up their theoretical results with experiments that compare to previous work and other standard benchmarks showing that their algorithms achieve good efficiency-fairness tradeoffs in comparison. Strengths: 1- The approach of replacing the problem of interest with a surrogate problem that can be solved via more standard regret minimization techniques and relating the regrets of the two problems is a nice and novel contribution, as is the bootstrapping technique used to prove regret bounds for the surrogate problem. 2- It is nice that such results can be obtained without significantly restricting the adversary. Weaknesses: 1- Assumption 2 seems reasonable for many problems, but Assumption 1 (that asks that demand vectors always be bounded away from 0) seems more restricting. Additionally, regret bounds seem to depend on the constants in these assumptions. 
2- The paper discusses alpha-fairness for alpha between 0 and 1, but for alpha-fairness to encompass more fairness notions, larger alpha is relevant as well, and is unaddressed by their results Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1- In the proof of lemma 2, I’m not sure I understand what happens to the summation over i in equation 19. For example, when the regret is lower bounded by 1 for every user at every timestep (eqn 21), wouldn’t substituting in equation 19 give you O(sqrt(mT)) regret? Not sure if I’m missing something. Similar questions strike me in Case 1 and 2 in pages 18 and 19. I also don’t fully understand these cases and how they follow from equation 26, so would really appreciate more explanation. 2- Some more discussion on alpha-fairness and exposition on why it’s a good notion of fairness for these problems would be good. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their review and their questions, which we address below. [$\textbf{Assumption 1}$] We agree with the reviewer and believe that it might be possible to establish similar regret bounds without Assumption 1. Please keep in mind that, for many problems, e.g., the caching problem, where at least one file is requested per round, Assumption 1 is naturally satisfied with $\delta=1$. One possible way to remove Assumption 1 is to divide the entire horizon into two phases – a regular phase where Assumption 1 is satisfied and a special phase where the demand vectors are small and Assumption 1 is not satisfied. The regret analysis for the regular phase proceeds in the same way as given in the paper. Next, one could argue that since the demand vectors are small for the special phase, it contributes only a tiny amount to the overall regret. We reserve this extended analysis for future work. [$\textbf{Larger alpha}$]: Note that in our formulation, $R_i(T)$ is defined as the total cumulative reward of agent $i$, which grows linearly with time $T$. As we mentioned in Remark 1 (page 4, line 146), a sublinear regret bound becomes vacuous for any $\alpha >1$. Hence, we did not consider larger values of $\alpha$ in this paper. We agree though that establishing theoretical guarantees in the $\alpha > 1$ regime under some reasonable performance metric would be an interesting direction for future work. [$\textbf{Response to Q1}$]: For the simplicity of exposition, the $O(\cdot)$ notation hides dependencies on all parameters (including the number of users $m$) except the time horizon $T$. Also, please note that the variable $R_i(T)$ denotes the cumulative rewards accrued by the $i$-th agent up to time $T$ (not regret). 
[$\textbf{Intuitive explanation for bounding the L-Regret}$]: In a nutshell, the bootstrapping technique used in the proof of Lemma 2 is an algebraic exercise of simultaneously bounding the order of growth of the dependent sequences $\{R_i(t)\}$ and $\text{L-Regret}_T$ under the action of the proposed online policy. We initially start with a crude bound for both sequences and then successively refine these bounds by exploiting their interrelations. To be precise, we use Eq (19) for bounding $\text{L-Regret}_T$ for any arbitrary round $T\geq 1$. From Eq (19), it follows that $\text{L-Regret}_T$ depends on the value of the cumulative reward sequence $\{R_i(t)\}$ – the larger the values of these variables, the tighter becomes the regret bound. Hence, the proof proceeds by establishing tight lower bounds for the cumulative reward sequence $\{R_i(t)\}$. This is established via Eq (26), which gives a lower bound on the cumulative reward accrued up to any round $T$. Let us now consider Case I $(0\leq \alpha \leq 1/2)$. Since $\text{L-Regret}_T = O(\sqrt{T})$ from Eq (22), Eq (26) implies that $R_i(t)$ increases linearly with $t$; substituting this bound into Eq (19), we conclude that $\text{L-Regret}_T= O\big(\sqrt{\sum_t \frac{1}{t^{2\alpha}}}\big)$. We finally arrive at the stated regret bound upon summing this series. A similar explanation holds for Case II as well. [$\textbf{Response to Q2 - Motivation for alpha-fair utility function}$]: We thank the reviewer for the suggestion. In our response to Reviewer 1 above, we have provided a brief discussion on the motivation for using the alpha-fair utility function. We will include this discussion in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. My concerns are addressed.
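The Case I chain of implications described in the rebuttal above can be restated compactly (an informal sketch in the rebuttal's notation; the final exponent follows from $\sum_{t=1}^{T} t^{-2\alpha} = O(T^{1-2\alpha})$ for $0\leq\alpha<1/2$, and recovers the known $O(\sqrt{T})$ bound at $\alpha=0$):

```latex
\begin{align*}
\text{L-Regret}_T &= O\big(\sqrt{T}\big)
  && \text{(crude bound, Eq.\ (22))}\\
\Longrightarrow\quad R_i(t) &= \Omega(t)
  && \text{(reward lower bound, Eq.\ (26))}\\
\Longrightarrow\quad \text{L-Regret}_T
  &= O\Big(\sqrt{\textstyle\sum_{t=1}^{T} t^{-2\alpha}}\Big)
   = O\big(T^{\frac{1}{2}-\alpha}\big)
  && \text{(substituting into Eq.\ (19))}
\end{align*}
```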
Summary: The paper considers a fair resource allocation problem in the setting of an unrestricted adversary, called generic online fair resource allocation (NOFRA). An OFA policy is proposed with reasonable theoretical guarantees. Strengths: The paper presents an online fair allocation algorithm, which approximately maximizes the aggregate $\alpha$-fairness function. A lower bound on the approximation factor is established. Weaknesses: I have several concerns about the problem formulation. 1. Why is the initial condition $R_i(0) = 1$ instead of 0? Maybe it is due to technical requirements? 2. To me, the motivation for using a concave $\alpha$-fair utility function is not strong. 3. $c$-regret is adopted in the paper. To me, I do not quite understand why it is a good metric to evaluate a policy. It seems it is merely for technical purposes? 4. Can the authors also highlight the technical novelty? Is Lemma 1 the most challenging? 5. The topic is about fair resource allocation. But the paper seems to be loosely connected with the fairness bandit literature. The definition of fairness here is also not clearly specified. The related literature should be included. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See the weakness part. I feel like the main content of this paper is to formulate the problem via a different regret objective. It is very loosely related to the usual fairness bandit literature. The writing should be improved. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. In the following, we address each of the comments in the same order. 1. [$\textbf{Justification for counting cumulative rewards from 1}$]: The initial value of $R_i(0)$ is set to $1$ instead of zero to make sure that the derivative of the $\alpha$-fair utility function $\phi(\cdot)$ remains finite at any round - a condition which is required by any online learning policy. Note that $\phi'(R_i)= 1/R_i^\alpha.$ Hence, the derivative of the utility function becomes infinite when $R_i=0.$ Thus, if we set the initial value of $R_i$ to one, then since $R_i$ is a monotone nondecreasing function of time, this technical issue is avoided. 2. [$\textbf{Motivation for alpha-fair utility function}$]: We want to emphasize that the $\alpha$-fair utility function is a standard utility function that enjoys several interesting technical properties and is heavily used in network resource allocation (see for example [1-2]). We also refer the reviewer to [3], which gives an axiomatic characterization of a fair utility function and shows that the alpha-fair utility function comes out naturally from the axioms. Other utility functions, e.g., proportional fair and min-max utilities, can be shown to be limiting forms of the alpha-fair utility. 3. [$\textbf{The c-regret metric}$]: Regret is a standard metric to evaluate the performance of any online policy. However, as we show in Theorem 2, no online policy can achieve a sublinear regret for the problem that we study in this paper. The metric c-Regret generalizes the usual regret metric, where the online policy achieves a sublinear regret against a $(1/c)$-fraction of the utility achieved by a static adversary. 
The smaller the value of c, the stronger becomes the performance guarantee of the online policy. The notion of c-regret has also been extensively used in the online learning literature (see, e.g., the papers [4-6]). This is why we chose c-regret as a performance metric in this paper. 4. [$\textbf{Technical Novelty}$]: Both Lemma 1 and Lemma 2 are non-trivial results. Lemma 2 introduces a new successive refinement argument to obtain a tight regret bound. 5. [$\textbf{Related work on fair bandits}$]: In our current related work, there is a paragraph that covers related work in the context of multi-arm bandits (Lines 510-521). We will expand this with additional recent results. [1] Mo, Jeonghoon, and Jean Walrand. "Fair end-to-end window-based congestion control." IEEE/ACM Transactions on Networking 8.5 (2000): 556-567. [2] Li, Tian, Sanjabi, Maziar, Beirami, Ahmad, and Smith, Virginia. "Fair Resource Allocation in Federated Learning." International Conference on Learning Representations (ICLR). 2019. [3] T. Lan, D. Kao, M. Chiang and A. Sabharwal, "An Axiomatic Theory of Fairness in Network Resource Allocation," 2010 Proceedings IEEE INFOCOM, San Diego, CA, USA, 2010, pp. 1-9, doi: 10.1109/INFCOM.2010.5461911. [4] Azar, Yossi, Amos Fiat, and Federico Fusco. "An $\alpha $-regret analysis of Adversarial Bilateral Trade." Advances in Neural Information Processing Systems 35 (2022): 1685-1697. [5] Paria, Debjit, and Abhishek Sinha. "$\texttt {LeadCache} $: Regret-Optimal Caching in Networks." Advances in Neural Information Processing Systems 34 (2021): 4435-4447. [6] Emamjomeh-Zadeh, Ehsan, Chen-Yu Wei, Haipeng Luo, and David Kempe. "Adversarial online learning with changing action sets: Efficient algorithms with approximate regret bounds." In Algorithmic Learning Theory, pp. 599-618. PMLR, 2021.
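For reference, the $\alpha$-fair utility family invoked in points 1-3 of the rebuttal above is the standard one of Mo and Walrand [1] and Lan et al. [3] (stated here in its usual normalization, which the paper effectively shifts via the initialization $R_i(0)=1$):

```latex
\[
\phi_\alpha(x) \;=\;
\begin{cases}
  \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[2mm]
  \log x, & \alpha = 1,
\end{cases}
\qquad
\phi_\alpha'(x) = x^{-\alpha}.
\]
% alpha = 0: total utility (no fairness weighting)
% alpha = 1: proportional fairness
% alpha -> infinity: max-min fairness
% phi'(x) = x^{-alpha} diverges as x -> 0, motivating R_i(0) = 1.
```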
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Back-Modality: Leveraging Modal Transformation for Data Augmentation
Accept (poster)
Summary: The paper introduces a novel approach to data augmentation that leverages recent advances in generative models. It presents a general framework and experiments with three settings for image, text, and audio tasks, showing promising results for their Back-Modality approach. Strengths: - The idea is simple but interesting and effective based on the presented experiments. - The method is generic and can be extended to other modalities, and it can also incorporate other models as the field evolves. - Case and ablation studies show the benefits of using the method and support most of the claims in the paper. Authors also compute statistical significance, which is an important step to validate results. Weaknesses: - Experiments are only done with one small model for each task. It would be interesting to see the gains from increasing the model size and if the benefits would increase or diminish. - The setup explanation is a bit confusing. For example, when explaining the data-scarce scenarios, it seems like n-shot is used for the number of generated samples per class during training. Still, there is no information about the training set size after sub-sampling or the label distribution on the undersampled training set. It would be better to state the setup clearly, e.g., we have X instances in the training set, with Y classes, and we generate Z samples per class. While most of it can be inferred by reading the appendix and doing some math, the paper would be better presented with more clear numbers. - The paper could have more experiments testing the limits of how much data could be generated by these methods that would still yield good improvements. Also, a high resource scenario experiment would be useful. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: There were two plots that I missed in this paper that would have made it stronger and more insightful. The first is a plot with less sub-sampling of the original data. 
Maybe with 5, 10, 20, 40, 60, 80, 100% of the training data + the generated instances. Would the proposed method still be useful if I have a lot of data? Or just if I don't have data at all? The second plot would be to go further on the generation. Experiments go all the way to 10-shot, but what if we could generate 100, 1000? Is 10 the limit for this approach? During the detailed strategies, you share some tricks that made your model work better, like adding the image label to the prompt. How much of an issue to the final results were the problems you addressed in that section? Could it make it harder to expand the technique to other settings? It would have been interesting to add the original results to the appendix so we know how many problems that was causing. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes, the authors have a limitations section. Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes Flag For Ethics Review: ['No ethics review needed.']
Rebuttal 1: Rebuttal: Dear Reviewer 44qx, We would like to thank you for your thoughtful review and constructive feedback on our paper. We appreciate your recognition of the contributions and strengths of our work. Here, we address the weaknesses and questions you have raised. ### **Weaknesses:** 1. **Experiments with One Small Model for Each Task:** We agree that experimenting with different model sizes could provide valuable insights. To mitigate this concern, we performed additional experiments with varying model sizes and will include them in the final version of the paper. For instance, in image classification tasks using ResNet-50, we observed a relative improvement greater than what was observed with a smaller version of ResNet. Similarly, in textual tasks, our experiments with bert-large showcased a relative improvement that exceeded that of bert-base. These findings indicate that in scenarios with sparse data, the data produced by our augmentation technique offers greater benefits to larger models. 2. **Setup Explanation and Clarity in Numbers:** We sincerely apologize for any confusion this might have caused. To clarify: When constructing the sparse dataset, we sample from each category. For instance, when we create a 10-shot dataset, we sample 10 unique data points from each class of the original data and then augment each data point five times. Using the 10-shot, back-captioning approach as an example: - In the original Tiny-ImageNet dataset, there are 200 classes, with each class containing 500 data points. - After sub-sampling, the dataset consists of 200 classes, with 10 data points per class. - With a 5-fold augmentation (as stated on lines 155-156, our default augmentation size is set at 5), this dataset still consists of 200 classes, with each class now having 50 data points. The term "shot" denotes how many samples are drawn from each class during sub-sampling, and the default augmentation multiple is 5. 
We understand the importance of clear presentation and will strive to state our setups more transparently in future iterations. Once again, we apologize for the oversight, and thank you for bringing it to our attention. We hope that this clarification will help in understanding our method better. ### **Questions:** 1&2. **Plots for Different Sub-sampling and Generation Rates:** We conducted experiments on our back-captioning method and have showcased the results in Figure 2 of the newly added PDF. For the first question, regarding the impact of sub-sampling: Due to computational and time constraints, the highest percentage of the original data used for experiments was 40%. Figure 2, sub-figure (a) illustrates the model's performance variation when increasing the amount of the original data while maintaining the augmentation multiple at 5. From the difference curve in this sub-figure, it is evident that as the volume of original training data increases, augmented data continues to yield benefits. However, the benefit shows a diminishing return as more original data is added. For the second question, on extending the generation: In the same Figure 2, sub-figure (b) shows that as the augmentation multiple grows, the model's performance improves. However, after a certain point, the gains tend to plateau. The maximum augmentation multiple tested in our experiments was 20 times. Thus, in summary: (1) Augmented data provides benefit even with an increase in original training data, but this advantage decreases as more original data is used. (2) Increasing the augmentation multiple improves model performance up to a certain point, after which the improvements plateau. The current experiment went up to 20 times, indicating that there's room for further exploration. 3. **Detailed Strategies and Potential Issues:** Firstly, to gauge the extent to which our strategies affected the model's performance, we conducted a comprehensive human evaluation on the augmented data. 
The evaluators consisted of five crowdsourced workers, with the final scores being an average of their evaluations. The results were indeed telling: For the images generated using the back-captioning method: - **With** our proposed strategies, the Label Invariance Score reached an impressive 99.2%. - **Without** the said strategies, the Label Invariance Score dipped to 84.7%. Similarly, for the sentences generated using the back-imagination method: - **With** our strategies, the Semantic Consistency Score was 98.8%. - In the absence of these strategies, the score fell to 94.1%. Upon closer inspection, the primary reason for the back-captioning method's inability to retain labels was that the image-captioning model failed to accurately describe the key labels for certain images. On the other hand, the back-imagination method's inconsistency in retaining semantics was because the diffusion model generated black and white images at times. This led to the subsequent back-captioning generation of descriptions like "A black and white photo of." The crux of the issue here lies in the limitations of the current open-source cross-modal models we utilized. While they've achieved significant success, they are not flawless and occasionally necessitate strategy-assisted filtering to improve their output. It's also worth noting that as the research in this domain rapidly progresses and cross-modal capabilities strengthen, the importance of such strategies will likely diminish. --- Rebuttal Comment 1.1: Comment: Thank you for the extensive and detailed response. I am more confident that this paper should be accepted now and will update my scores. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for recognizing and affirming our work. We sincerely appreciate your insightful feedback and recognition. Your comments during the review process have been pivotal in guiding improvements to our work. We are committed to integrating your suggestions into our updated version. 
Thank you once again for your invaluable insights and recognition. Best wishes, [Authors of Submission7875]
Summary: The paper introduced a new data augmentation technique, called back-modality. The augmentation is based on modal transformations. Specifically, instances in the original modality (e.g., image) are transformed to an intermediate modality (e.g., text), augmented in the intermediate modality and then transformed back to the original modality. Experimental results in a few-shot setting on three benchmark datasets show that the proposed augmentation technique produces better results than the base model and also than other existing augmentation techniques. Strengths: The proposed augmentation technique leverages recent advancements in transformations between different modalities. The strategy is rather general and can be applied to a variety of applications for which cross-modal transformations can be obtained. It does not require access to model weights or fine-tuning, and can thus be seen as "a variant of the Cross-Modal-Models-as-a-Service (CMMaaS) application". Experimental results clearly suggest the advantage of the proposed technique as compared to existing augmentation approaches on the three datasets used. Weaknesses: While the authors aim to produce a general augmentation strategy based on cross-modal transformations, their actual process involves some decisions specific to the modalities considered in the study, meant to minimize the production of low-quality augmentations (and no augmentations are used for images as an intermediate modality). This can make the strategy hard to apply without extensive analysis of the augmentations. Some of the choices made are rather arbitrary, e.g. why 5 augmentations per instance? Or 5% of the data used to balance the training data? Shouldn’t the size of the training data be given by the number of shots per class? The discussion on diversity and affinity is very brief in the main paper. Some details are provided in the appendix. 
It is not clear why they were not included in the paper (the current paper has only 8 pages). Same for the details of the case study regarding the back-imagination and back-speech augmentations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: To minimize the low-quality back-captioning augmentations, the authors “explicitly inject the image labels into the text prompts, which leads to the generation of descriptions that incorporate these finer-grained labels.” It is not clear if the label information is used during testing, especially when the task is to predict the image label. Please elaborate. Some of the choices made are rather arbitrary, e.g. why 5 augmentations per instance? Or 5% of the data used to balance the training data? Shouldn’t the size of the training data be given by the number of shots per class? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors emphasize some limitations related to the size of the cross-modal models. Other limitations related to generalizability could be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## We thank Reviewer QqX9 for the comprehensive and insightful review of our paper. Below, we address the specific weaknesses and questions raised in the review: ### **Weaknesses:** **1. Decision Specific to Modalities:** While we understand the concern regarding the decisions specific to the modalities considered in the study, we would like to emphasize that the steps taken are important to ensure the quality of the augmented data. We include comprehensive guidelines in the manuscript, detailing how to adapt the method to various modalities. This inclusion facilitates the understanding and reproduction of our method and shows that a certain level of customization is often essential to adapt any machine learning method to specific applications. However, this does not impede the generalization of the method. As stated in the paper (lines 159-162), one reason for this decision is that images produced via certain augmentation techniques, such as random erasing and cutout, often present a substantial challenge to image captioning models (specifically, OFA models). On the other hand, when human evaluation is performed on augmented samples, the combination of multi-imagination and multi-captioning appears to be sufficient to yield satisfactory results. Moreover, if there are more robust image captioning models in the future, the image augmentation method can indeed be used as part of our framework. **2. Arbitrary Choices in Augmentation Parameters:** The selection of these parameters was driven by our intention to provide a proof-of-concept that the proposed method is effective without overwhelming resource consumption. It's a deliberate trade-off: - **Too Small Augmentation Size**: If the augmentation size is too small, the effect of our back-modality technique may not be discernible. This could lead to a situation where the method's potential is underestimated or overlooked entirely. 
- **Too Large Augmentation Size**: Conversely, an excessively large augmentation size would consume more computational resources. While it might yield better results, it could also make the technique less accessible to researchers with limited resources, defeating the purpose of demonstrating a widely applicable method. Our choice of 5 augmentations per instance was therefore a calculated decision. We aimed to find a middle ground where the method's effectiveness could be demonstrated without significantly burdening computational resources. We were not pursuing the absolute best results but rather a pragmatic balance that underscores the method's viability and potential for various applications. We will include these justifications in the revised manuscript, ensuring that readers understand the rationale behind these choices. Thank you again for pointing out this area for clarification, and we believe this amendment will strengthen the overall contribution of our paper. **3. Lack of Detail on Diversity, Affinity, and Case Studies:** We appreciate the feedback on the brevity of certain sections in the main paper. We agree that a more comprehensive discussion of diversity, affinity, and case studies in the main text would enhance understanding. We will revise the manuscript to include a more comprehensive discussion within the main body, as well as more details on back-imagination and back-speech augmentations. ### **Questions:** **1. label information:** I would like to clarify the methodology regarding injecting image labels into text prompts, as it seems to have caused some confusion. The process of injecting these labels was employed strictly during the augmentation phase of the training dataset, and not during testing. Specifically, the label information was only used during training to guide the generation of descriptions for the data augmentation. We only augment the training dataset, ensuring that the original, unaltered testing set is used for evaluation. 
This method was carefully designed to ensure the augmentations were aligned with the correct classes, thereby enriching the training data with meaningful variations. During the testing phase, no such label information is injected. This was a deliberate choice to maintain a fair and unbiased evaluation of the model's performance on unseen data. It ensures that the experimental results are a genuine reflection of the model's ability to predict labels without any additional guidance. **2. why 5 augmentations per instance?** Addressed above. **Or 5% of the data used to balance the training data? Shouldn’t the size of the training data be given by the number of shots per class?** In response to your inquiry, the concept of "shot" in our work indeed refers to how many actual data points from each class are available for training. This is central to the notion of few-shot learning, where the emphasis is on learning from a limited number of examples. In our experiments, we sample various datasets corresponding to different shots but meticulously ensure that the total quantity of each dataset stays within (≤) a 5% limit of the training dataset. Regarding your specific question about the 5% limitation, as stated in lines 165-166 of our manuscript, we "keep the sub-sampled set within 5% of the number of the training dataset." This approach was carefully devised to reflect real-world scenarios where data scarcity is a prevalent challenge. We hope that these clarifications address your concerns. We believe that with these details, the validity of our experiments and conclusions is evident. We greatly appreciate your feedback and are looking forward to your further comments. --- Rebuttal 2: Comment: I have read the other reviews and authors' responses. I appreciate the authors' efforts to thoroughly address all the questions raised. --- Rebuttal Comment 2.1: Comment: Dear Reviewer, Thank you for your diligence in reading through the other reviews and our responses. 
We deeply appreciate your recognition of our efforts to address every question raised. Your feedback has been instrumental in helping us refine our work, and we're grateful for your guidance. Best wishes, [Authors of Submission7875]
Summary: This paper introduces a new method to perform data augmentation: Back-Modality. The augmentation process involves translating the data into another modality, performing augmentation in that modality, and translating each augmented other-modality instance back into the original modality. Each of the three steps could produce multiple augmented data. The paper included three instantiations of Back-Modality: Back-captioning (image -> text -> image), Back-speech (text -> audio -> text), Back-imagination (text -> image -> text). The cross-modal translations are done by pre-trained models. These instantiations are then evaluated on few-shot learning settings (1-10 instances per class) within their domains, and the experiments showed that the models trained with Back-Modality-augmented data outperform those trained with no data augmentation or with other baseline data augmentation techniques. Additional ablation studies showed that augmentations at each of the three steps are necessary to achieve the best results. Strengths: This paper presented a really interesting idea: to use cross-modal back-translations as a data augmentation technique. The idea is straightforward and easy to follow, and the authors demonstrated that the approach works well when data is extremely scarce (1-10 examples per class), generating augmented data that is more diverse and helpful compared to other existing data augmentation methods. Weaknesses: A lot of key experiment details are missing (which I can't find in either main text or appendix), including but not limited to: (1) How many augmentations are generated at each cross-modal translation step and each other-modality-augmentation step for every original data point (i.e. the multipliers at F,G,H for each instantiation)? The number of diverse augmentations is the most important detail for data augmentation experiments. 
(2) There is no description of which caption augmentations are performed or which GPT model was used during the "augmentation with GPT" step. (3) How many augmentations are generated with each baseline augmentation method? Do they match the number of augmentations with Back-Modality? Without these key details, it is difficult to assess the soundness of the experiment results and whether they support the conclusion. Another key concern for the new approach is how applicable it is in the real world. This technique relies heavily on the quality of the cross-modal translators, and these pre-trained models usually only work well in domains/tasks where data is not scarce in the first place (since training good cross-modal translators needs a lot of high-quality data). All experiments in this paper are done under artificially created data-scarce settings. In real life, if there is a task that really only has very scarce data (such as medical images for rare diseases, etc.), the existing cross-modal translators are likely not going to work well with them, as they were not trained with a lot of data like this. I would be more convinced about the practical usability of this technique if the paper included experiments (or just proof-of-concept ones) on performing Back-Modality in a real data-scarcity situation instead of artificially created data-scarcity in the most common domains. The presentation of the paper also needs improvement. There are not enough concrete examples of the data augmentations in the main part of the paper. Since there is still plenty of space in the main paper, perhaps adding a few illustrations of one instantiation from beginning to end would help (like the ones in the appendix). Also, the font sizes of the tables are inconsistent. The font sizes of Tables 2, 3, and 4 are really big while those of Tables 1 and 5 are normal. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) Please address the missing key experiment details as mentioned in the weakness section. (2) I assume that in Tables 2, 3, and 4, the number of "Shot" means how many real data points from each class are available. Is this correct? If so, what does the 5% subsampling mean under "Data-scarce scenarios" of section 3.2? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors addressed the limitation of additional computation cost of Back-Modality. I believe there are a few additional limitations that may need to be addressed, such as: (1) the requirement of existing cross-modal translation models that can handle the task-specific data well (2) the experiments in this paper are only done on artificially-created data scarcity situations and limited modalities Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer Usve,** Thank you for taking the time to review this paper and provide detailed feedback. We sincerely appreciate the insightful comments and the critical examination of our work. However, it seems that there may have been parts that were missing or misunderstood. Below we provide responses to the concerns and questions you have raised: ### **Weaknesses and Missing Experiment Details** 1. **Number of Augmentations at Each Step:** You may have missed certain specifics in the main text. On lines 104-105, we mention that "In practical implementation, we conduct uniform sampling at random on these augmented data to obtain the final augmentation dataset." Furthermore, as stated on lines 155-156, our default augmentation size is set at 5. Theoretically, the product of l, n, and m should exceed the required augmentation multiplier. For instance, in our experiments with Back-captioning, where l=n=m=2, for every image, we generate 8 (2x2x2) augmentations, and subsequently, we uniformly sample 5 of these as the final augmented data. For Back-imagination, where l=3, n=1, and m=3, for each sentence, we generate 9 (3x1x3) sentences, and then uniformly sample 5 as the final augmented data. 2. **Caption Augmentations and GPT Model Used:** In the main text, specifically on lines 156-158, we detail that for back-captioning, "We utilize the gpt-3.5-turbo model to augment captions, with the prompt being: 'Maintain the nouns in the following sentence intact and generate semantically diverse sentences.'" 3. **Comparison with Baseline Augmentation Methods:** As indicated in the main text on lines 153-156, unless explicitly stated otherwise, our default augmentation size is consistently set at 5, which is also applicable to the baseline methods, such as Random Erasing, Auto augment, EDA, Back-translation, etc. ### **Concerns about Real-World Applicability** 1. 
**Controlled and Reproducible Experimentation:** Firstly, the primary rationale behind our use of artificially created data-scarce settings was twofold: it provided a controlled environment for experimentation, and it ensured reproducibility. Reproducibility is a cornerstone of scientific research, and we wanted other researchers to be able to reliably reproduce our results without the inconsistencies of real-world situations. Our primary objective was to demonstrate the feasibility of our framework — showing that a data augmentation method for one modality can augment data in another modality. While the broader application in real-world scenarios is undeniably valuable, it wasn't the main focus of this paper. 2. **Generalizability and Domain Adaptation:** You pointed out concerns regarding the quality of cross-modal translators in real data-scarce situations. However, it's crucial to note the generalizability and domain-adaptation capabilities of cross-modal models. As evident in machine learning, models trained on extensive data can sometimes capture patterns useful even in sparse-data situations. The striking generalizability of large language models stands as a testament. The potential for future larger cross-modal models to demonstrate enhanced generalization cannot be discounted. Moreover, recent research has increasingly honed in on the domain adaptability of diffusion-based cross-modal models in scenarios with limited data, encompassing Few-Shot[1], One-Shot[2], Zero-shot[3], Domain Adaptation[4], and Unsupervised Domain Adaptation[5]. The advancements in these areas hold promise in allaying the concerns you've raised. Analogously, when reverse translation techniques in natural language processing were initially introduced, they were primarily applicable in the domain of data-scarce language translation. As technology advanced and more robust translation models were developed, they emerged as popular data augmentation techniques across various NLP domains. 
3. **Proof-of-Concept Experiment:** Taking a cue from your suggestion, we embarked on a proof-of-concept experiment. In collaboration with medical experts, we gained access to a diffusion model, akin to [6] and [7], that was trained on a public dataset. We used this model to generate X-ray images for Pulmonary Alveolar Proteinosis, a rare lung disorder. These synthesized images served as augmented data to train a lung disease image classification model, resulting in a performance uptick of 3.7%. [1] Few-Shot Diffusion Models [2] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation [3] Zero-shot Medical Image Translation via Frequency-Guided Diffusion Models [4] PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion [5] One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models [6] RoentGen: Vision-Language Foundation Model for Chest X-ray Generation [7] Adapting Pretrained Vision-Language Foundational Models to Medical Imaging Domains ### **Presentation Improvement** 1. **Concrete Examples and Illustrations** Recognizing the importance of these details, we will relocate this content from the appendix to the relevant section in the paper. We have also created a new PDF dedicated to additional case studies. 2. **Font Sizes of Tables** We will promptly correct this issue to ensure consistency in the font sizes across all tables. ### **Questions** **Clarification on "Shot" and 5% Subsampling:** Yes, the number of "Shot" does refer to how many real data points from each class are available. As stated in lines 165-166, we “keep the sub-sampled set within 5% of the number of the training dataset.” We sample various datasets corresponding to different shots, but meticulously ensure that the total quantity of each dataset stays within a 5% limit of the training dataset, thereby crafting an authentic simulation of data scarcity. 
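The counting scheme in point 1 of this rebuttal (l x n x m candidates per instance, then uniform sampling down to the default augmentation size of 5) can be sketched as follows. The three stage functions here are toy placeholders, not the actual captioning, GPT, or diffusion models, and the function names are hypothetical:

```python
import random

def back_modality_candidates(x, to_inter, aug_inter, to_source, l, n, m):
    """Generate l*n*m candidate augmentations for one instance x by
    chaining the three stages (e.g., for back-captioning:
    image -> caption, caption paraphrasing, caption -> image)."""
    cands = []
    for i in range(l):                       # l cross-modal translations
        inter = to_inter(x, i)
        for j in range(n):                   # n intermediate-modality augmentations
            aug = aug_inter(inter, j)
            for k in range(m):               # m back-translations to the source modality
                cands.append(to_source(aug, k))
    return cands

# Toy deterministic stages that just tag the input so the counts are visible
stages = (lambda s, i: f"{s}|t{i}",
          lambda s, j: f"{s}|a{j}",
          lambda s, k: f"{s}|b{k}")

cands = back_modality_candidates("img0", *stages, l=2, n=2, m=2)  # 2x2x2 candidates
final = random.sample(cands, 5)  # uniform sampling down to the default size of 5
```

With l = n = m = 2 this yields 8 candidates per image, of which 5 are kept, matching the back-captioning numbers quoted above; the back-imagination case (l=3, n=1, m=3) gives 9 candidates per sentence.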
We hope that these clarifications address your concerns and we kindly request reconsideration of the rating after we implement the proposed revisions. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! I think your responses addressed all of my concerns. I am happy to improve my scores. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your understanding and for re-evaluating our work. We deeply appreciate your feedback and are pleased to know that our clarifications met your concerns. We will be sure to integrate your valuable feedback into our revised version. We greatly value your expertise and guidance, and appreciate your responsible attitude. Best regards, [Authors of Submission7875]
Summary: This paper proposes a pipeline called Back-Modality, which uses cross-modal generation models for data augmentation. In practice, the original data in the source modality is first transformed into an intermediate modality using a cross-modal generation model. Then the typical augmentation strategy for the intermediate modality will be applied. The augmented data in the intermediate modality will be transformed back into the source modality using another cross-modal generation model in the reverse generation direction. Specifically, three implementations of Back-Modality are explored, including back-captioning, back-imagination, and back-speech. On image and text classification tasks, the proposed augmentation pipeline shows effectiveness. Strengths: This paper has the following strengths: 1. The motivation of Back-Modality is straightforward and the method is easy to implement, which has the potential to be widely used in future research works. 2. Three specific implementations of Back-Modality are proposed and experimented with, proving the generalizability of this schema. 3. The cross-modal generation models used in this work are easy to access and have SOTA performance (OFA, Stable Diffusion), guaranteeing the effectiveness and reproducibility of this pipeline. 4. The experimental results are good compared with typical data augmentation methods, and data diversity is analyzed. Weaknesses: I think this paper will be much improved if the following issues can be further addressed: 1. Human evaluation on the augmented samples: Clearly, the method of Back-Modality generates more diverse data samples. If human evaluation can be facilitated to verify that the semantic meaning of the original samples is kept and the labels of the samples are still correct, it will be much better. 2. Verify robustness in adversarial setting (or some hard test samples): This work mainly conducts experiments on the scenario of limited training samples, which is definitely okay. 
Meanwhile, another aspect of data augmentation is the robustness to adversarial attacks. If more experimental results in the adversarial setting or on harder testing samples can be provided, the effectiveness of Back-Modality will be much more solid. 3. More choices of cross-modal generation models, including different models and the same model with different model scales (like OFA-base compared with OFA-huge, how much will performance on augmentation be affected?). 4. The cost of obtaining the augmented samples compared with previous (non-model-based) augmentation pipelines. (pointed out in the limitation section as a future research direction, but the clarification of the current method is still needed) Technical Quality: 3 good Clarity: 3 good Questions for Authors: The overall top-1 accuracy score on Tiny ImageNet is low in the experiment. Honestly, I am not so familiar with this ImageNet variant benchmark. Is this a proper choice or a sanity experimental setting here? It would be very good if more discussion can be provided on selecting this as the benchmark. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have pointed out the limitation of computation cost, which is reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer UnFZ, Thank you for your thoughtful review and constructive feedback on our submission. We appreciate your recognition of the method's potential and your careful assessment of its strengths. In response to the weaknesses and questions you highlighted, we would like to provide the following clarifications and details: 1. **Human Evaluation on Augmented Samples**: In response to your suggestion, we conducted a human evaluation on the augmented data. The evaluators consisted of five crowdsourced workers, and the final scores are their average. The results from the evaluation were as follows: For the images generated using the back-captioning method: - Label Invariance Score: 99.2% For the sentences generated using the back-imagination method: - Semantic Consistency Score: 98.8% These high scores suggest that both methods have performed remarkably well in their respective evaluations. The results affirm that Back-Modality maintains the essential characteristics of the original data while adding diversity, thus further validating our approach. 2. **Robustness in Adversarial Settings**: We agree that the robustness of Back-Modality in adversarial settings or with harder testing samples would solidify our claims. In the revised version, we will include results on adversarial settings, providing further evidence of the effectiveness of our method. To assess the robustness of our model, we reported both the accuracy before and after the adversarial attacks, along with the absolute drop in accuracy due to the attacks. For image classification tasks, we employed the *Universal adversarial perturbations* technique. 
The results are as follows: - Base model: 11.75% to 5.23% (a decrease of 6.52%) - Random Erasing: 12.59% to 5.60% (a decrease of 6.99%) - Auto augment: 13.23% to 8.60% (a decrease of 4.63%) - Alignmixup: 14.34% to 7.88% (a decrease of 6.46%) - Puzzle Mix: 15.66% to 8.49% (a decrease of 7.17%) - Back-captioning: 20.07% to 14.02% (a decrease of 6.05%) From the results, it's evident that Back-captioning retains a leading accuracy post-attack, showcasing good robustness. In terms of absolute value drop, it stays in the middle ground among the evaluated methods. Furthermore, for textual data augmentation techniques, we evaluated robustness using various adversarial attack methods. For the textual entailment task, we utilized the bert-attack method. The results were: - Base model: 84.57% to 7.23% (a decrease of 77.34%) - Back-imagination: 89.14% to 16.76% (a decrease of 72.38%) For the SST-2 dataset, we employed the textbugger attack method: - Base model: 61.10% to 3.95% (a decrease of 57.15%) - Back-speech: 63.21% to 8.47% (a decrease of 54.74%) From these results, we can infer that our augmentation techniques, including Back-captioning, Back-imagination, and Back-speech, showcase a notable degree of resilience against adversarial attacks. 3. **More Choices of Cross-Modal Generation Models**: In the paper, for Back-captioning with a 10-shot setting, we primarily used the OFA-large model, which gave us a top-1 accuracy of 20.07%. Following your advice, we also experimented with OFA-huge under the same conditions. The results showed a notable improvement, with the top-1 accuracy reaching 22.12%. 4. **Cost of Obtaining the Augmented Samples**: The table below reflects the additional computational overhead of various augmentation methods compared to the base model. 
| Method | Additional Computational Overhead (RTX A6000) |
| --- | --- |
| RandErasing | 4 m 55 s |
| Puzzle Mix | 1 h 29 m 25 s |
| Alignmixup | 1 h 59 m 45 s |
| Back-captioning (our method) | 11 h 35 m |
| Auto augment | About 49 h |

The main cost of our method is in generating images with the diffusion model, while the primary overhead of auto augment is in learning augmentation policies. For text augmentation, on the textual entailment task our method, back-imagination, took 4 h 13 m 45 s, while the back-translation method took 5 h 38 m 12 s. On the sentiment analysis task, back-speech took 35 m 27 s, whereas the back-translation method took 5 h 22 m 4 s.

Question: **Overall Top-1 Accuracy Score on Tiny ImageNet**

1. **About Tiny ImageNet**: In the supplementary material, you'll find an in-depth introduction to Tiny ImageNet [1] in lines 13-16. For clarity, Tiny ImageNet is a compact version of the comprehensive ImageNet dataset. It comprises 100,000 images spanning 200 classes, with each class containing 500 training images, 50 validation images, and 50 test images. The resolution of these images is 64x64 pixels. By employing this dataset, we ensure that the evaluation process remains robust yet computationally manageable.

2. **Relevance in Recent Research**: Tiny ImageNet has been used extensively in various studies, ranging from data augmentation [2] to self-supervised learning [3] and open-set recognition [4], among others. Its widespread adoption underscores its significance and suitability as a benchmark for various experimental setups.

3. **Regarding the Accuracy Score**: We understand the concern regarding the "low" top-1 accuracy score on Tiny ImageNet. However, this was by design: the intent was to simulate a data-scarce scenario. Our primary aim was to demonstrate the efficacy of our method while also ensuring a judicious use of computational resources. 
In light of the above, we firmly believe that Tiny ImageNet was an appropriate choice for our experiments. References: - [1] Tiny ImageNet Visual Recognition Challenge - [2] Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup - [3] OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning - [4] Learning Placeholders for Open-Set Recognition. We appreciate your “Weak Accept” rating and the confidence you placed in your assessment. We believe that the modifications and clarifications proposed above will address your concerns and elevate the contribution of our paper. --- Rebuttal Comment 1.1: Comment: Thank the authors for providing the detailed response with experiments. I will keep the rating as weak accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for recognizing and appreciating our response as well as the overall content of our paper. We will ensure to incorporate your constructive feedback into our revised version. Best wishes, [Authors of Submission7875]
Rebuttal 1: Rebuttal: Dear Reviewers, Firstly, I would like to express my profound gratitude for the time and effort you invested in reviewing our manuscript. We are deeply appreciative of the recognition most reviewers gave to our methods and experiments. Your constructive feedback is invaluable. From the collective feedback, we identified two primary and common concerns: 1. The suggestion that case study examples should be moved from the appendix to the main content. 2. The need for clearer explanations of the experimental setup.

Using the 10-shot, back-captioning approach as an example:
- In the original Tiny ImageNet dataset, there are 200 classes, with each class containing 500 data points.
- After sub-sampling, the dataset consists of 200 classes, with 10 data points per class.
- In our experiments with Back-captioning, where l=n=m=2 (see lines 100-102), for every image we generate 8 (2x2x2) augmentations and subsequently sample 5 of these uniformly as the final augmented data.
- With a 5-fold augmentation (as stated in lines 155-156, our default augmentation size is set at 5), this dataset expands to 200 classes, with each class now having 50 data points.

The term "shot" denotes how many samples are drawn from each class during sub-sampling. We have carefully considered and responded to the individual queries and points raised by each reviewer. Our detailed responses for each concern are outlined in our individual responses. Again, thank you for your constructive insights. We believe that, with your feedback, our work has been significantly improved. Pdf: /pdf/1a943aae3141e9c40b8e40ff581ac4a73531e163.pdf
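The dataset bookkeeping in the 10-shot walkthrough above can be sanity-checked with a few lines of Python; all constants come from the rebuttal, and `l`, `n`, `m` are the branching factors the authors reference from lines 100-102 of the paper:

```python
num_classes = 200   # classes in Tiny ImageNet
shots = 10          # samples drawn per class during sub-sampling
l = n = m = 2       # Back-captioning branching factors (lines 100-102)
aug_size = 5        # default augmentation size (lines 155-156)

# Each image yields l * n * m candidate augmentations, of which
# aug_size are uniformly sampled as the final augmented data.
candidates_per_image = l * n * m          # 2 x 2 x 2 = 8
assert aug_size <= candidates_per_image   # sample 5 of the 8

# 5-fold augmentation: 10 images per class become 50 data points.
per_class_after = shots * aug_size
total_after = num_classes * per_class_after
```

This reproduces the numbers in the walkthrough: 8 candidates per image, 50 data points per class, and 10,000 data points overall after augmentation.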
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposed a method to transform data between modalities so as to augment the data. Such a method makes it possible to leverage multiple existing modalities to generate more useful data to train the model. Strengths: 1. The proposed method is modality-agnostic, so the initial modality can be transformed into any other modality and reversed back. 2. The generated data vary from the original data, introducing more variety in the training sources. 3. The performance in few-shot settings is largely improved. Weaknesses: 1. The paper's presentation could be improved. The tables and citations are not in good shape. 2. More case studies are needed. In the current version, the case study section is not clear: it simply describes the improvement without presenting concrete cases. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer WjzN, Thank you for the time and effort you put into reviewing our paper. We appreciate your comments and your recognition of the strengths of our proposed method. Here, we would like to address the weaknesses and questions that you raised in your review: 1. **Presentation**: We understand that you found the tables and citations not in good shape. We acknowledge this issue, and we will revise the paper to enhance the layout and formatting. For example, we will standardize the font sizes across all tables to enhance the visual coherence and readability of the paper. By organizing the tables and citations meticulously, we will make sure to provide a clearer and more accessible presentation. Additionally, if there are other particular formatting or stylistic concerns, such as alignment, captioning, or citation style, please do point them out. We are committed to adhering to the preferred presentation guidelines, and your precise guidance will enable us to address these aspects meticulously. 2. **More Case Study Needed**: We value your suggestion to provide more concrete cases in our paper, and we would like to clarify the following two actions we will take to address this concern: a. **Moving Content from Appendix**: In fact, we have provided more extensive case studies, along with detailed analysis of back-imagination and back-speech augmentations, in the appendix of the current submission. Recognizing the importance of these details in the main text, we will relocate this content from the appendix to the relevant section in the paper. This move will ensure that readers have direct access to the comprehensive case studies without the need to refer to supplementary material. b. **Additional PDF for More Case Studies**: In response to the request for a more exhaustive exploration, we will create a PDF dedicated to additional case studies. This material will further illustrate the applications and potential benefits of our method. 
By implementing these measures, we believe we can provide a more comprehensive and accessible understanding of our method's functionality and significance. We are committed to offering a thorough analysis and are confident that these adjustments will align with your expectations. 3. **Rating**: While your rating indicates a borderline reject, we believe that addressing the above weaknesses will enhance the paper's quality and contribute to the overall field. Since the soundness and contribution have been rated as good, we kindly request reconsideration of the rating after we implement the proposed revisions. 4. **Other Comments**: If there are additional specific concerns or suggestions not covered in the review, please do share them with us. We are committed to making all necessary improvements to ensure the paper meets the standards of the conference. In conclusion, we believe that the concerns you raised can be addressed through targeted revisions. Thank you once again for your constructive feedback, and we look forward to hearing from you soon. Best regards, [Author(s) of Submission7875] --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: Thanks for your clarification. I am happy to increase the score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We are delighted to hear that our responses are able to address your concerns. We deeply appreciate your willingness to re-evaluate and recognize the merit of our work. Please be assured that we will integrate your valuable feedback into our revised version. Best wishes, [Authors of Submission7875]
null
null
null
null
null
null
Group Fairness in Peer Review
Accept (spotlight)
Summary: The paper describes a new algorithm for assigning papers to peer reviewers with the goal of ensuring that review assignments are in the core. The motivation behind this is that large conferences benefit from joining many different communities of research and thereby enabling more interdisciplinary cooperation. If review assignments are in the core then subgroups will have no incentive to break off and form smaller, more siloed conferences. An algorithm is described that, under certain assumptions, creates assignments between papers and authors that are within the core and is claimed to run in polynomial time. This algorithm is compared with two commonly used algorithms on a number of metrics. While CoBRA, the new algorithm, does not provide as much utility as the TPMS and PR4A algorithms, it is shown that the other algorithms almost always produce outcomes that are not in the core. The paper concludes by suggesting that some middle ground between these approaches may be a useful path forward. Strengths: While the problem described by the paper has received a fair bit of research, the application of the core appears to be a novel approach to the review assignment problem. The problem is not of critical importance but is one that has room for improvement and can relatively easily benefit from algorithmic enhancements. Overall the paper provides clear high-level descriptions of each step and the motivation behind why the authors believe the core is a useful idea for review assignment. The authors are also clear about the limitations of their work and where it does and does not improve upon the status quo. The experiment section was a useful addition. Despite the results not dominating other algorithms, I am glad there is some evaluation of CoBRA. Weaknesses: While the high-level components of the paper are explained quite clearly, I found the specific details in need of clarification. 
A number of grammatical issues exist (often across longer sentences; some issues on lines 8, 209, 226-229, 232, 256). Attempting to reformulate some technical ideas and writing with simple, shorter sentences might ease reader comprehension. Your motivation is clearly described, but I am not wholly convinced that smaller, more focused venues are generally worse. While they may be less naturally suited for interdisciplinary work, they also (intuitively, to me) increase the chance that reviewers and researchers come across work more connected to the areas they are working in. While you claim there are only mild assumptions of order separability and consistency, there appear to be a number of modeling assumptions that may not be realistic. You say that each author must serve as a reviewer, which (as you do mention) is not possible to guarantee. You also say that CoBRA "takes as input only the preference ranking of each author over individual potential reviewers" (line 58/59). Perhaps this is common phrasing for reviewer assignment work, but *as phrased* this seems like an extremely strong requirement. Being clearer that you probably mean something like a similarity score would be an improvement. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Can you clarify what the X/Y matching is that you are referring to in the Datasets section? Does this indicate that you are giving as input to the algorithm only X out of Y papers from a conference? How large do the deviating groups for TPMS and PR4A tend to be? If they are extremely tiny or nearly the entire group, I would expect that to be a less meaningful deviation than if they are some distinct subfield of research within the conference. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have done a very good job of discussing their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your useful feedback. We will incorporate all your suggestions in our revision. Please see our common answer to all reviewers with regards to your comment about the benefits of focused venues (which we fully agree with). As for the inputs, we meant to say that we do not *need* numerical similarity scores, only ordinal comparisons (which can always be induced if you have access to similarity scores). But we’ll rephrase this for clarity. ### **Q1: X/Y matching** Yes, this is what we mean. We (like Xu et al. [25] and Dhull et al. [26]) are forced to do this because the datasets are anonymized whereas we need to connect the author set to the reviewer set. Thus, we use the conflict information to deduce authorship as best as we can, which is not a perfect process. ### **Q2: Size of deviating groups** We agree that the size of the deviating groups is an interesting measure, which we did not consider. In the table below, you can see the *maximum size* of a successfully deviating coalition, averaged across 100 runs, together with the standard error. Recall that each run is a subsampled dataset of size 100, so these can be interpreted as percentages. It seems that under both TPMS and PR4A across all three datasets, the largest deviating communities are 6-15% of the conference size, which we believe can indeed reflect the sizes of some of the largest subcommunities at CVPR and ICLR. (Note that these are the largest deviations and there are smaller deviations too.) We will be happy to include this in our revision.

| Dataset | TPMS | PR4A |
| --- | --- | --- |
| CVPR '17 | 6.64±0.77 | 7.5±0.77 |
| CVPR '18 | 10.54±1.29 | 11.49±1.49 |
| ICLR '18 | 11.25±1.76 | 15.01±1.76 |

--- Rebuttal Comment 1.1: Comment: Thank you for the response to my questions. I do agree that groups of 6-15% of the conference size (or even slightly smaller) can certainly represent coherent subcommunities. 
I believe some mention of this would be a useful addition to the paper. While the paper is compelling I will leave my review as is based on the limitations that CoBRA must overcome before seeing practical usage.
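For readers reproducing numbers like those in the coalition-size table above, the "mean ± standard error over repeated subsampled runs" protocol is straightforward; a minimal sketch (the run values below are made up for illustration, not taken from the paper) is:

```python
import math
import statistics

def mean_and_sem(values):
    """Mean and standard error of the mean over repeated runs."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))
    return m, sem

# Hypothetical maximum deviating-coalition sizes from five subsampled
# runs; with 100 papers per run, sizes read directly as percentages.
runs = [6, 8, 7, 5, 9]
m, sem = mean_and_sem(runs)
```

In the paper's experiments this is done over 100 runs per dataset, which is what produces entries of the form 6.64±0.77.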
Summary: The authors consider a problem of finding reviewer assignments for conference peer review. Specifically, they aim to find a valid reviewer assignment subject to the constraint that no group of authors can achieve a (strictly) preferred reviewer assignment among themselves and a subset of their authored papers. This can be seen as both a fairness constraint and an incentive-compatibility constraint. The authors provide an algorithm to find such a reviewer assignment and theoretically prove that the returned assignment satisfies the desired properties. They then empirically compare the proposed algorithm to several baselines on subsampled real conference data, and show that it achieves reasonable total welfare while other baselines often violate the core constraint. Strengths: - The problem of finding a paper assignment in the core within the peer review context is interesting and new (to my knowledge). I think the core constraint is well-motivated as a group fairness constraint and is substantively different from the other notions of fairness considered in prior work. - The writing is generally very clear and the paper is easy to read. - The author preference model used by the paper is significantly more general than the standard additive utility assumed in other works. - The empirical results that standard TPMS assignments have significant numbers of core violations on real conference datasets are of practical interest. Weaknesses: - The setting assumed by the paper is highly simplified compared to the standard peer review setting: all papers are authored by a single author, and no conflicts-of-interest other than authorship are considered. This hinders the practical relevance of the algorithm. - While the proposed algorithm is claimed to be polynomial-time (Theorem 1), this claim does not seem to be proved and no analysis of the time complexity is provided. - The empirical results are constructed by subsampling small sets of reviewers and papers. 
While this is done for computational reasons when evaluating core violations, I’m not sure why the social welfare results were also approximated. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Could the authors clarify the time complexity of the algorithm? - I assume that the USW results in Table 1 are average assigned similarity, not total similarity as defined. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors clearly stated the limitations of their algorithm, noting that their contribution was primarily conceptual and not practical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your useful feedback. ### **Regarding subsampling** Indeed, we subsampled sets of reviewers and papers throughout our experiments for consistency (as you noted, this was required for evaluating core violations). If you are interested, the results for USW and ESW without any subsampling are in the table below. It is comforting to note that the qualitative relationships between the different algorithms according to each metric remain the same. It is worth noting that USW goes up across the board, while ESW goes down for CoBRA and TPMS but up for PR4A, as opposed to the subsampling case from the submission. Note that theoretically, each of USW and ESW can go either up or down when we consider a superset of the data. We would be happy to add this table to the appendix in our revision, if you wish us to.

| Dataset | Algo | USW | ESW |
| --- | --- | --- | --- |
| CVPR'17 | CoBRA | 1.644 | 0.000 |
| CVPR'17 | TPMS | 1.970 | 0.000 |
| CVPR'17 | PR4A | 1.919 | 0.384 |
| CVPR'18 | CoBRA | 1.208 | 0.000 |
| CVPR'18 | TPMS | 1.586 | 0.004 |
| CVPR'18 | PR4A | 1.560 | 0.731 |
| ICLR'18 | CoBRA | 0.251 | 0.015 |
| ICLR'18 | TPMS | 0.284 | 0.038 |
| ICLR'18 | PR4A | 0.278 | 0.086 |

### **Q1: Time complexity** Please see our common response to all the reviewers for the time complexity. ### **Q2: USW** Yes, we apologize; these are averaged assigned similarities. We used the average rather than the sum to report USW and ESW on the same scale for better comparison. We will revise the text accordingly. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for your thorough response and clarifications. I do think it's useful to include the welfare results on the full dataset, or at least to acknowledge that the ranking of the algorithms is not affected by the subsampling. 
After reading the other reviews and the author response, I will revise my score from a 6 to a 7: despite the practical limitations of the work (most importantly the single-author assumption), I see the conceptual contribution as interesting and valuable. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read through our response and revising your score. We will do both: acknowledge the fact that the ranking of the algorithms is not affected by subsampling in the main text and include the welfare results for the full dataset in the appendix.
Summary: The authors propose to use the core concept in the context of the peer review system. While potentially decreasing overall welfare, the new paradigm offers fairer treatment of small sub-communities, erasing their incentive to create an independent venue. The paper provides an algorithm for assigning reviewers in the restrictive case of single-author submissions. Experiments support their claims. Strengths: The authors are trying to address a crucial problem in the modern-era ML community: how to assign papers fairly and keep more communities happy. Overall I enjoyed reading the paper, found the notion of the core interesting, and found the proposed methodology sound. Weaknesses: The authors are pretty up-front about listing the weaknesses of the work, the major one, of course, being the single-author case. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It would be nice if the authors could write a more general model and definition (not only single author) for future reference, while then saying that for now they only treat a particular case as a proof of concept. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Section 5 on limitations is pretty clearly written and the list seems exhaustive. Having a purely technical background, I do not feel qualified to comment on the main premise of the work: making smaller communities stay within gigantic conferences. At least in TCS and theoretical ML there are some great smaller conferences (SoCG, SOSA, COLT, ALT, FaccT to name a few). I am not sure that these communities would benefit from being incentivised to stay within, say, NeurIPS. 
In particular, it seems to me that the fairness and accountability community has greatly benefited from the creation of FaccT (the most recent example I know). So, it would be nice to hear an opinion from people who submit regularly to smaller conferences. That being said, I am positive about this submission as it tries to tackle a rather complicated, ill-posed, and timely problem. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your useful feedback. As per your suggestion, we will define the core more generally in our revision for the benefit of future work, especially given that it is indeed a concept that applies quite broadly. Please also see our common response to all the reviewers regarding your comment on the advantages of having small conferences.
Summary: This paper proposed an approach that lets the assignment of the peer review model satisfy the core, a fairness requirement over groups of authors, so as to prevent small research communities from having the incentive to deviate and set up their own separate conferences. Through theoretical analysis, the authors found that the proposed method, CoBRA, returns a valid assignment in the core if agent preferences are order separable and consistent. Experimental results show that the proposed CoBRA can produce much fairer assignments compared to the baseline approaches. Strengths: 1. This paper addresses an important problem: ensuring group fairness in the peer review process. 2. The proposed CoBRA is theoretically and technically sound. 3. As a theoretical paper, it is not difficult to follow. Several examples and cases are given for the audience to better understand the theoretical analysis as well as the proposed models. Weaknesses: 1. The necessity of achieving fairness in peer review has not yet been well motivated. It is unclear why achieving group fairness will result in improved satisfaction for various communities. 2. The proof of Theorem 1 is not complete. Lemma 1 only validates that CoBRA can return a valid assignment in the core, but the theoretical analysis of the time complexity of CoBRA to generate the assignment is incomplete. 3. The optimality of the valid assignment is not proven. 4. The flexibility of putting CoBRA into practical use has not been fully discussed. I suggest the authors discuss how CoBRA could deal with imbalanced submissions when put into practical use. 5. It is not easy for a non-expert audience to understand this paper, especially why satisfying the core is good enough for the review assignments. Some minor suggestions: 1. The proposed methods can be extended to scheduling tasks in many other application scenarios such as task scheduling in collaborative edge computing, federated learning, etc. 
Merely producing a fair peer-review assignment may limit the contributions of CoBRA. 2. The technical challenges could be moved to the introduction section. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How optimal is CoBRA's assignment? 2. How does each author's paper submission amounts affect the fair peer review assignment made by CoBRA? Some areas will receive a huge number of submissions but the other areas may not. The number of submissions by different authors also varied a lot. How does CoBRA address such data imbalance issues and guarantee group fairness? 3. How to determine groups in the experiment? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations have been adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your review. Please see our common answer to all reviewers for motivations behind the core as well as the exact running time of CoBRA. ### **Q1 & Q3 (and related comments)** > How optimal is CoBRA's assignment? > The optimality of the valid assignment is not proven. > How to determine groups in the experiment? We believe that these comments may have stemmed from a fundamental misunderstanding regarding our objective, the core. At its heart, the core is a *qualitative* notion of fairness; a reviewing assignment is either *in the core* or *not in the core*. There is no notion of “optimality”. One can say that approximation of the core turns this into a quantitative objective, but we prove that our algorithm finds an assignment *in the core* (i.e., achieves the best possible 1-approximation of the core). Thus, there is no optimality to be proven. Also, the main benefit of the core is that it simultaneously provides a fairness guarantee for *every possible group* (that is, all $2^n-1$ non-empty subsets of agents). As such, unlike most definitions of fairness considered in the machine learning literature, where one must specify groups in advance and fairness can only be achieved with respect to those groups, finding a reviewing assignment in the core achieves fairness with respect to all possible groups simultaneously. This also eliminates the need for determining groups in the experiments. ### **Q2** > Some areas will receive a huge number of submissions but the other areas may not. The number of submissions by different authors also varied a lot. How does CoBRA address such data imbalance issues and guarantee group fairness? This is again addressed by the definition of the core, which the assignment returned by CoBRA satisfies. The fairness guarantee offered by the core to every group depends on what the group can achieve on its own. 
As such, a group that consists of very few people but generates a large number of submissions may not be offered a strong guarantee because such a group, on its own, would not be able to generate high-quality reviews for so many submissions of its own. On the other hand, a group which produces only as many submissions as it can review well on its own and has the internal expertise to produce high-quality reviews for these submissions will find that CoBRA’s assignment treats it quite well, no worse than the satisfactory reviewing outcome it can produce on its own. We find that this is a natural way in which the core handles imbalances between individuals or research areas without relying on any free parameters (deciding which can become quite controversial in practice). --- Rebuttal Comment 1.1: Comment: 1. CoBRA can only produce assignments in the core. Are those assignments good enough for practical use? I don't think the authors discuss this point. It is not easy for the nonexpert audience to understand why satisfying the core is enough for making the assignment. 2. The authors claim that "a group that consists of very few people but generates a large number of submissions may not be offered a strong guarantee because such a group, on its own, would not be able to generate high-quality reviews for so many submissions of its own". However, this is the situation that a conference may face and cannot be handled by CoBRA because "the fairness guarantee offered by the core to every group depends on what the group can achieve on its own". --- Reply to Comment 1.1.1: Comment: Thank you for putting in further thought and effort in our paper. We appreciate your comments. Our view on these issues is as follows. **Regarding comment 1:** Being "good for practical use" has two parts. Is it necessary in practice? Is it sufficient in practice? 
For the first part (necessity), we provide motivations for needing the core in practice in the submission, and elaborate on it further in our common response. For the second part (sufficiency), we believe that one should look towards the algorithm, not the core, which is only a minimum requirement. An algorithm can have aspects other than satisfying the core which can lead to reasonable assignments in practice. This is why we conducted experiments with real data to test CoBRA against state-of-the-art algorithms (TPMS and PR4A) on core and social welfare metrics. We find that CoBRA provides an interesting tradeoff: while CoBRA suffers from a small but not insignificant welfare loss, TPMS and PR4A incentivize realistic deviations (see also our response to Reviewer wnxf regarding the sizes of such coalitions), which CoBRA prevents. In that sense, if TPMS/PR4A are on one end of a spectrum which optimizes welfare, CoBRA is on the other end which focuses on fairness. As the very first paper on the subject, we certainly do not claim that CoBRA is ready for practical deployment. We hope that future work can build on ours to design better algorithms that strike a balance between the two extremes to find assignments with better welfare while preventing most, if not all, realistic deviations. We hope that answers your question. **Regarding comment 2:** We fully agree that a conference may include a community that is small but generates a disproportionately large number of submissions. To be clear, we are not saying that CoBRA will necessarily treat such a community poorly, only that it cannot *guarantee* good treatment to such a community even in the worst case regardless of the problem instance. But no algorithm can do so because, for example, the problem instance may consist solely of such communities and no assignment treating them all well may be feasible (due to the shortage of reviewers). 
Once again, we hope that future work can build on ours to find instance-dependent bounds, where better guarantees for such communities with reviewing deficits can be provided in cases where other communities exist which contribute reviewing surpluses.
Rebuttal 1: Rebuttal: We thank all the reviewers for their effort and for providing helpful reviews. We will be happy to incorporate all the suggestions of the reviewers as explained in more detail in the individual responses. Let us address two comments raised by multiple reviewers in this common response. ### **Motivation behind the core in peer review** There are three key motivations for studying the core in the peer review setting. 1) **Fairness:** The core acts as a notion of group fairness, which provides a guarantee that every possible group (even a community that doesn’t yet have a well-established identity) will be treated well relative to what the group can achieve on its own, which is a function of the reviewing burden imposed by the group versus the reviewing capacity contributed by the group. 2) **Stability:** The core also acts as a notion of stability because it ensures that no group would have an incentive to break off due to a feeling of being mistreated by the large conference (such as NeurIPS). Note that communities may still prefer to set up their specialized conferences for very good reasons including those outlined by the reviewers, but we believe that receiving low-quality reviews should not be one of them. This is precisely what the core aims to ensure. We also remark that while specialized conferences have their advantages, so do large conferences. For example, they can find the diverse reviewing expertise needed for emerging multidisciplinary areas and provide a venue for interdisciplinary dialogues to take place. Thus, it is useful to retain at least some large conferences alongside specialized conferences, but this would be difficult if various communities keep breaking off due to not receiving high-quality reviews. 
In this sense, we don’t see the core as creating roadblocks to the creation of specialized conferences, but rather as a way of mitigating harm imposed on communities in large conferences, thereby maintaining their existence alongside more specialized venues. 3) **Robustness:** A key benefit of the core is that it is a robust definition of fairness that does not require specifying groups in advance, unlike most definitions studied in the machine learning literature. Instead, it simultaneously provides a fairness guarantee for *every possible group* (that is, all $2^n-1$ non-empty subsets of agents); this includes groups defined based on sensitive attributes, intersectional groups, or even groups that do not yet have a well-established identity. Further, the guarantee scales in a principled manner, without having to set any controversial parameter values, as a function of what each group can achieve on its own, as mentioned above. We will clarify these more explicitly in our revision and revise language in our paper which, unfortunately, seems to incorrectly suggest that “preventing” small communities from deviating is our intention (it is not!). ### **Running time of CoBRA** We apologize for not including a running time analysis of CoBRA. Since we show that the number of assigned reviews grows monotonically (with each increase evidently requiring polynomially many steps), we believed it was immediate that the algorithm runs in polynomial time, which is all that Theorem 1 claims. Like many theoretical works, we were not overly concerned with the exact time complexity. However, we see that it is useful to not only lay out the polynomial time argument, but to also identify the exact complexity so future works can perhaps improve on it. We do so here and will include this in our revision. First, let us consider the time complexity of PRA-TTC. In each iteration, the algorithm assigns at least one extra reviewer to at least one incompletely-assigned submission. 
This can continue for at most $m \cdot k_p \leq n \cdot k_a$ iterations, where the inequality follows from our condition for ensuring the existence of a feasible reviewing assignment. In each iteration, it takes $O(n)$ time to find and eliminate a cycle in the preference graph and $O(n^2)$ time to update the preference graph (for an arbitrarily-picked incompletely-assigned submission of each agent, we need to find the most qualified reviewer who can be additionally assigned to it). Thus, in total, the runtime of PRA-TTC is $O(n^3)$ because $k_a$ is a small constant in practice. After PRA-TTC terminates, CoBRA calls the Filling-Gaps algorithm. However, Lemma 3 ensures that at the end of PRA-TTC, $|L \cup U| \le k_p+1$, which is a small constant. And Filling-Gaps only makes local changes that affect these constantly many agents. As such, the running time of Filling-Gaps is constant as well. Overall, the time complexity of CoBRA is $O(n^3)$. For the average-case runtime, we have attached a PDF to the common response, which shows the running time of CoBRA as a function of the number of submissions in the conference. Results are obtained by subsampling submissions from the three datasets using the same process as in our experiments section, and average runtime over 25 runs is shown together with the standard error. Across all datasets, CoBRA runs in less than half a minute even with 800 submissions. We will be happy to include this figure in the appendix in our revision. Pdf: /pdf/fcb646cb1804ce7fdad339cd29f37520371a793f.pdf
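The $O(n)$ cycle-elimination step mentioned above relies on a standard fact about TTC-style preference graphs: if every node has exactly one outgoing edge (each agent points at its most-preferred available reviewer), following pointers from any start node must revisit some node, so a trading cycle always exists and can be found by a single walk. A minimal illustrative sketch of that step (our own illustration, not the CoBRA code):

```python
# Finding a cycle in a functional graph (every node has out-degree 1),
# as in the top-trading-cycles step: walk the successor pointers until a
# node repeats; the suffix of the walk from that node is a cycle. O(n) time.

def find_cycle(successor: dict) -> list:
    """Return one cycle in a graph given as {node: next_node}."""
    start = next(iter(successor))
    seen = {}              # node -> position in the walk
    walk = []
    node = start
    while node not in seen:
        seen[node] = len(walk)
        walk.append(node)
        node = successor[node]
    return walk[seen[node]:]   # suffix starting at the first repeated node

# Example: 0 -> 1 -> 2 -> 1 contains the cycle [1, 2].
print(find_cycle({0: 1, 1: 2, 2: 1}))  # [1, 2]
```

Repeating this walk once per assigned review, with an $O(n^2)$ graph update after each elimination, gives the $O(n^3)$ bound stated above.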
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors frame their investigation into group fairness in the peer review setting in the context of large conferences (e.g., NeurIPS, AAAI) by considering a simplified peer review model that enforces the existence of a valid reviewer assignment. Within this framework, the authors apply the fairness notion of "the core" to this setting and present an efficient graph-based algorithm that always returns a valid assignment in the core under minor conditions based on author preferences. The authors empirically validate their method using real data from CVPR and ICLR, and evaluate the cost of utilizing this algorithm (in terms of lost utilitarian and egalitarian welfare) in order to satisfy the fairness notion of "the core" and prevent the incentive for any community to establish their own separate smaller conference (which can lead to research topic insularity and harm interdisciplinary research areas). Strengths: - While there is a good bit of notation in the model presented, it is kept fairly simple and is clearly described/understandable (to me). This is significant because the paper's primary contributions are methodological and theoretical. - The authors do well in presenting mathematical definitions with more intuitive descriptions, lowering the cognitive barrier on the reader. - The proof provided for the main theoretical finding (Theorem 1, along with subsequent lemmas and propositions) is very rigorous and thorough. - The toy example illustrating the execution of the algorithm introduced by the authors (CoBRA) is helpful for building intuition of the method (adding a graph-based visualization of the two companion graphs would be a bonus). - The empirical results and interpretation are well-described in words and may benefit from a modified format to enable more direct comparison between methods and highlight potential performance trade-offs (please refer to suggestion in "Weaknesses" section on elaboration). 
- The authors humbly acknowledge the limitations of their work before it is ready to be used in practice and highlight some potential follow-up directions for investigation. - Overall comment: much of the content is clearly described, but the presentation structure adds cognitive overhead on the reader and makes it more challenging to understand and utilize the provided information. There are clear strengths in this category, but also real room for improvement. [I struggled to determine the rating in this category, perceived to be between a 2-3 overall. Please refer to "Weaknesses" and "Questions" for more details.] Weaknesses: Primarily, feedback is around presentation and experimental results/details. Some of these are more significant than others. Overall, the paper is fairly well-written but appears very cramped; the structure and presentation could be improved to support understandability and readability. - Additional details on the empirical setup and experiments are necessary to support reproducibility. (Note that there is a line in the Appendix outlining the computation resources and some core code files are included in the supplementary.) - Aligned with the comment under "Strengths", there would be benefit in modifying/better aligning the quantitative results with the discussion/explanation (in Section 4 -> Results). The content is good, but the presentation is a little hard for me to follow and has room for improvement (perhaps by highlighting best performance for each measure/column in Table 1?). When the page limit is not as tight, it'd be helpful to reorganize Section 4 into subsections (and similarly in Section 4). - It's worth noting that _what_ the authors present is fairly clear, but a clearer motivation of the open problem they are working to address is lacking. (Follow-up question in "Questions" section.) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. You mention that theoretically, the CoBRA algorithm is efficient (polynomial time). 
In terms of actual compute time, how does this scale as a function of the dataset size? Can you provide estimates for the wallclock compute time for the empirical analysis? 2. Why is the introduction of the notion of the core meaningful for the peer reviewer assignment problem space? The Related Work included highlights some relevant work in this area, but does not (from my perspective) very clearly or succinctly motivate the open problem you're addressing in a compelling way. This makes it challenging to determine the potential impact of this work. 3. [A "nice to have" suggestion around presentation] While _technically_ the mathematical definitions of measures computed in empirical evaluation are included in Section 4, they are buried within the main text body; these may be easily missed by the reader or make it harder to understand the paper (it appears that main paper real estate was scarce). I suggest that the authors make room in the main paper to describe the measures used mathematically and verbally (at least prior to publishing) and/or expand upon these along with additional rationale/significance in the Appendix. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Practical limitations are well outlined in the final section of the paper, along with directions for follow-up work to build upon this theory-forward paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for providing useful recommendations for improving the readability of our paper. We will incorporate them in our revision. ### **Q1: Time complexity** The worst-case time complexity of CoBRA is $O(n^3)$ and we have provided the average-case runtime in the PDF attached to the common response. Please see our common answer to all the reviewers for more details. ### **Q2: Motivation behind the core in peer review** Please see our common answer to all reviewers.
Implicit Manifold Gaussian Process Regression
Accept (poster)
Summary: The curse of dimensionality manifests in Gaussian process models in that their default kernel choices often depend on the Euclidean distance between two points. Euclidean distance is a poor metric especially in high-dimensional settings, where we expect data points to lie on an, often unknown, manifold. This paper combines findings from multiple precursor works in assembling a tool set for Gaussian process regression with a Matérn kernel, while implicitly inferring the underlying manifold structure. Strengths: I believe that while the paper mostly combines existing theoretical results in manifold learning and numerical analysis, it presents a compelling, novel framework for tackling manifold learning with Gaussian process regression. The paper is largely self-sufficient: it tackles the complex problem of learning manifolds in the context of Gaussian processes by proposing solutions to all facets of the problem: characterizing manifolds with the graph Laplacian, gradient-based learning, and scalability. Weaknesses: I think the paper lacks a presentation of _how_ learning the manifold structure aids in the predictive performance on the rotated MNIST dataset. While one can clearly see _why_ the learning of an implicit manifold helps in predicting rotation angles for handwritten digits, as rotation expressly traverses a nontrivial manifold, the benefits of this work go beyond simple improvements in prediction, to interpretability. While the authors did a very good job in demonstrating how the model correctly inferred the dumbbell-shaped manifold, I believe similar illustrations can also be done for the MNIST rotations. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - I believe the presentation of the paper could be improved by a clear separation of predecessor works and original contribution: for example, Section 3.1 is almost solely composed of existing work except for the theoretical contribution of the Matérn kernel convergence — an overall marginal contribution in the paper. - Table 3: “RMSE” in the table seems confusing for prediction of rotating angles. Is it so that the author is referring to the shortest arc as a metric for distance in rotations? - I believe that the paper could benefit from an illustration of what implicit manifold is learned from the rotated MNIST data: for example, a display of what the kernel values matrix looks like along the trajectory of a rotation compared to the Euclidean kernel matrix, or something else showing the geodesics of the points on the rotation trajectory. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *"I believe the presentation of the paper could be improved by a clear separation of predecessor works and original contribution: for example, Section 3.1 is almost solely composed of existing work except for the theoretical contribution of the Matern kernel convergence — an overall marginal contribution in the paper."* * Thank you for mentioning this point. In light of this and some other comments we intend to revisit the presentation in Sections 3 and 4 to make them better structured, emphasize specific algorithm steps and their order, as well as to distinguish our own contributions there from the previous work. We believe it will be fairly easy to do this and this will not require major changes. "*Table 3: “RMSE” in the table seems confusing for prediction of rotating angles. Is it so that the author is referring to the shortest arc as a metric for distance in rotations.*" * This is a good catch! We actually do use RMSE here although arc length is more appropriate. This, however, does not affect the results because the angle of rotation in the dataset was limited to $\pm 45$ degrees. "*I believe that the paper could benefit from a illustration of what implicit manifold is learned from the rotated MNIST data: for example, a display of what the kernel values matrix looks like along the trajectory of a rotation compared to the Euclidean kernel matrix, or something else showing the geodesics of the points on the rotation trajectory.*" * Thank you for suggesting the ideas for visualizing the learned manifold. Since our method uses the eigenfunctions of the Laplacian as features, it is closely related to the *Laplacian eigenmaps* and the *diffusion maps* dimensionality reduction techniques. Using one of these techniques is most likely the best way of visualizing the manifold we are learning. Actually, a manifold of rotated images is visualized right in the original paper on diffusion maps: see Figure 2 in Coifman et al. (2006). 
Note though that such a visualization only relies on the first two eigenvectors of the Laplacian---the number of vectors determines the dimension of the "picture" and thus should be no larger than 3---and our kernel uses many more of them. We will mention this in the paper. --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: I thank the authors for their detailed response, and I maintain the same score assessment but slightly decrease my confidence level as I realize my lack of expertise in manifold learning algorithms such as diffusion maps. I mainly have one question regarding the rebuttal PDF: the authors added a 3D manifold example in the figure, but I did not see an explanation (it's possible that I missed it in your responses to other reviewers). Could you please clarify what message you meant to convey by this figure? --- Reply to Comment 1.1.1: Comment: Sure. The figures are related to the second point raised by reviewer 2iqe: "As the methodology applies to general manifolds, I would expect to see more empirical results. For example, some synthetic 3d surfaces..." The figures show the results in the semi-supervised learning scenario on a complex 3D surface. For the final version of the paper, we chose a 2D example which allows us to visualize the model's extension to the ambient space, as depicted in Fig. 1 or Fig. 3. The visualization of noise presence within manifold samples (Fig. 4) was notably clearer in 2D than in 3D, as well. Nevertheless we agree with reviewer 2iqe that a more complicated 3D case might further increase the understanding of the algorithm's features. For instance, notice how the posterior standard deviation smoothly decays in proportion to the geodesic distance rather than following the ambient Euclidean distance. We plan to add these results in the appendix.
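The Laplacian-eigenmaps style visualization discussed in the exchange above (embedding points via the first non-trivial eigenvectors of a random-walk graph Laplacian) can be sketched as follows. This is an illustrative sketch only, not the paper's code: the Gaussian-kernel dense graph and the `bandwidth` parameter are our simplifying assumptions (the paper itself uses a weighted KNN graph).

```python
# Sketch (our assumptions): embed a point cloud in 2D using eigenvectors
# 2 and 3 of the random-walk Laplacian L = I - D^{-1} A, where A is a
# Gaussian-kernel adjacency matrix. Eigenvector 1 (constant) is skipped.
import numpy as np

def eigenmap_2d(X: np.ndarray, bandwidth: float = 1.0) -> np.ndarray:
    """2-D Laplacian-eigenmaps embedding of an (n, d) point cloud."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    A = np.exp(-sq / (2 * bandwidth ** 2))
    np.fill_diagonal(A, 0.0)
    D = A.sum(1)
    L = np.eye(len(X)) - A / D[:, None]                  # random-walk Laplacian
    vals, vecs = np.linalg.eig(L)                        # L is non-symmetric
    order = np.argsort(vals.real)
    return vecs.real[:, order[1:3]]                      # skip trivial eigenvector

# Usage: points sampled on a circle embed to a (rescaled) circle.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
emb = eigenmap_2d(np.c_[np.cos(theta), np.sin(theta)], bandwidth=0.5)
print(emb.shape)  # (50, 2)
```

As the reply notes, such a picture uses only two eigenvectors, whereas the kernel itself uses many more.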
Summary: The authors propose a novel methodology for doing GP regression which is able to learn the implicit structure from the data. This is particularly useful in high-dimensional problems, where the data lies on a low-dimensional manifold. The proposed method allows learning the implicit manifold in a fully differentiable way, using a nearest-neighbour graph. The method is based on Matérn GPs on manifolds and graphs (Borovitskiy, 2020, 2021). To approximate the eigenvalues of the Laplace-Beltrami operator on the implicit data manifold, the authors use a random-walk normalized graph Laplacian (on a KNN graph) weighted to overcome possible non-uniform sampling density. This work leverages an efficient approximation to the KNN, the sparse structure of the precision matrices, and RFF kernel approximation to be able to scale to large datasets. The model is tested on a synthetic dumbbell-shape manifold and on rotated MNIST data. Strengths: This is, to the best of my knowledge, a novel methodology for GP regression. This work is built on a solid foundation and the method is derived naturally from Matérn GPs on manifolds and graphs. The authors cleverly use a variety of techniques to keep good scalability. In my opinion, the main contribution of this work is in the elegant assembly of the right methods, which gives a practically usable model for a well-known problem. The manuscript is written clearly and is easy to follow. Weaknesses: Due to the incremental nature of the paper, it is not always clear what is a novel contribution of this work and what is background from previous works (for example, the last paragraph of 2.1). The experiments are somewhat limited. The method is tested on one synthetic dataset and a few versions of rotated MNIST. In the MNIST dataset, however, the added rotation must be inducing a lot of structure. It would be beneficial to showcase the method on a real dataset where an implicit manifold already exists in the data. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Could you please clarify the following: There are no results on spectral convergence for KNN graphs independent of data sampling density. You use a weighting scheme to deal with non-uniform data sampling. You do not claim that it implies spectral convergence. How much of a problem is it, both theoretically and practically? Suggestions: 1. In figure 2, perhaps use different colours for the kernel and sample plots. 2. eq(11) I believe $\lambda_i$ should be $\lambda_l$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
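For readers unfamiliar with the graph Matérn construction this review refers to, the kernel on a graph with Laplacian eigenpairs $(\lambda_l, f_l)$ takes the form $K = \Phi \,\mathrm{diag}\big((2\nu/\kappa^2 + \lambda_l)^{-\nu}\big)\, \Phi^\top$. A minimal numerical sketch of our reading of this construction (not the paper's code; the unnormalized path-graph Laplacian below is our toy example):

```python
# Sketch: graph Matérn kernel matrix built from the eigendecomposition of a
# symmetric graph Laplacian L, following the spectral form
# K = Phi diag((2*nu/kappa**2 + lambda_l)**(-nu)) Phi^T.
import numpy as np

def graph_matern_kernel(L: np.ndarray, nu: float = 2.0, kappa: float = 1.0) -> np.ndarray:
    """Matérn kernel matrix on a graph with symmetric Laplacian L."""
    lam, phi = np.linalg.eigh(L)                   # eigenpairs of the Laplacian
    spectrum = (2.0 * nu / kappa**2 + lam) ** (-nu)
    return (phi * spectrum) @ phi.T                # Phi diag(spectrum) Phi^T

# Usage on a 3-node path graph; the result is a symmetric PSD matrix.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
K = graph_matern_kernel(L)
print(np.allclose(K, K.T))  # True
```

In the paper's setting the Laplacian comes from a weighted KNN graph and only a truncated set of eigenpairs is used for scalability; the dense decomposition here is purely for illustration.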
Rebuttal 1: Rebuttal: *"it is not always clear what is a novel contribution of this work and what is a background from previous works (example, last paragraph of 2.1)."* * Thank you for mentioning this point. In light of this and some other comments we intend to revisit the presentation in Sections 3 and 4 to make them better structured, emphasize specific algorithm steps and their order, as well as to distinguish our own contributions there from the previous work. We believe it will be fairly easy to do this and this will not require major changes. "*The experiments are somewhat limited. The method is tested on one synthetic dataset and a few versions of rotated MNIST. In the MNIST dataset, however, the added rotation must be inducing a lot of structure. It would be beneficial to showcase the method on a real dateset where an implicit manifold is already existing in the data.*" * In addressing the high-dimensional context, we employed the rotated MNIST dataset where we have good reasons to believe that the manifold hypothesis holds. Other high-dimensional datasets may lack an explicit enough underlying manifold structure. The attached PDF (see general response) contains a table presenting preliminary results in the supervised scenario for a random dataset from the UCI Machine Learning Repository. RMSE performance with IMGP surpassed that of EGP for larger sample sizes. However, NLL exhibited a less favorable trend compared to our observations for rotated MNIST. This warrants further investigation. It is possible that the chosen dataset, despite the high-dimensional feature space, does not inherently embody a manifold structure or such a structure could be highly irregular. This is supported by the need for a substantial number of eigenpairs in achieving satisfactory outcomes in contrast to rotated MNIST i.e. slow spectrum decay. 
We will discuss this example or a similar additional example in the paper, as well as an example for a manifold in 3D---the results for the latter were actually excluded from the draft to save space. We think it might make sense to apply the technique in contexts like modeling 3D structures of molecules where there is inherent symmetry (invariance or equivariance to rotations and translations), as an interesting direction for further work. *"Could you please clarify the following: There are no results on spectral convergence for KNN graphs independent of data sampling density. You use a weighting scheme to deal with non-uniform data sampling. You do not claim that it implies spectral convergence. How much of a problem is it, both theoretically and practically?"* * We hypothesize that convergence should still hold true. There is apparently no proof of this fact in the literature though. It does not seem to be out of reach for the proof techniques used for similar problems, but such a proof would likely turn out to be long and heavy due to multiple "moving parts". * Practically, we do not expect this to be a problem either. Although we use convergence results to motivate the technique, the graph construction itself reflects the geometry of a point cloud in a rather intuitive way. What is more, it is easy to imagine that the graph is capable of representing structures beyond manifolds in the rigorous mathematical sense, e.g. manifolds of different dimensions glued together (as presented in Figures 2--4 of Dunson et al. (2022)). *"In figure 2, perhaps use different colours for the kernel and sample plots."* * Thank you, we agree this should improve clarity. We will make this change for the camera-ready version. *"eq(11) I believe $\lambda_i$ should be $\lambda_l$?"* * This is an actual typo. Thank you very much for reporting this!
Summary: The authors propose a methodology to extend Matérn processes to implicit manifolds, which are modeled by $K$-NN graphs, and the kernel relies on the set of eigenvalues/vectors of the associated graph Laplacian. An approximation of the eigenfunctions based on the eigenvectors is provided, which together with an additional trick allows extending the process to the vicinity of the implicit manifold. The hyperparameters are learned in supervised and semi-supervised scenarios via a differentiable objective, and the performance is demonstrated in the experiments. Strengths: - The technical part of the paper seems correct and sensible, but I have not checked the derivations in detail. - The idea is a direct and relatively simple extension of previous work. - The results, mainly due to the synthetic experiment, seem convincing. - The paper in general is well-written. Weaknesses: - The main drawback of graph-based methods is typically the construction of the graph and how well this recovers the actual structure of the underlying manifold. I suppose that the current method has the same limitation, which is not clearly discussed/analyzed in the paper. - As the methodology applies to general manifolds, I would expect to see more empirical results. For example, some synthetic 3d surfaces and more complicated high-dimensional real-world datasets, as the rotation of MNIST is closer to a synthetic experiment. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Q1. Lines 98-100: Is there any reference showing that plugging the graph Laplacian and Gaussian noise into Eq. 3 gives a Matérn process on the graph? Q2. Why is the dimension of the implicit manifold (graph) 0? I suppose that the underlying manifold has some dimensionality. I think that there are approaches relying on graphs that try to estimate this dimensionality. Q3. How sensitive is the method with respect to the graph construction? In particular, how does the number of neighbors $K$ influence the result? 
Can this be estimated somehow, e.g. by cross-validation? Q4. How does the method behave if there is more than one connected component in the dataset? Q5. Lines 144-150: The graph Laplacian converges to the Laplace-Beltrami operator (in terms of eigenvalues/vectors/functions) for $K$-NN graphs as long as the data distribution is uniform on the manifold? While for non-uniform distributions such a convergence result does not exist? Q6. Lines 177-179: I think that here the gist is unclear. Q7. Which Nyström method is used? Q8. Fig 3. I think it is interesting to see the actual eigenvector on the data (e.g. color the points) and how this correlates to the extended version. Q9. Eq. 11: the eigenvalue should probably be $\lambda_l$ and not $\lambda_j$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors do not discuss the limitations of their work, in particular the construction of the graph, which I think is critical for the performance of the model. The methodology that is proposed does not have a direct negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *"the construction of the graph and how well this recovers the actual structure of the underlying manifold"* * This is indeed a valid concern. We apologize for not explicitly discussing the limitations in the paper. We sketched a paragraph on this (see general response) and aim to include it in the new version of the paper. A KNN graph can fail to capture the geometry. Because of the asymptotic convergence, it should not be a problem when there is a lot of (potentially unlabeled) data. This is why we emphasize the semi-supervised case where a large number of unlabeled data points can facilitate small-data learning from the labeled ones. Furthermore, our framework, being fully differentiable, sets up the groundwork for more sophisticated techniques that might include *graph learning* as in, for example, Kazi et al. (2022). In this particular paper, however, we wanted to describe the simple model and lay the groundwork for the possible extensions. *"The paper would greatly benefit from additional empirical findings."* * In addressing the high-dimensional context, we employed the rotated MNIST dataset where we have good reasons to believe that the manifold hypothesis holds. Other high-dimensional datasets may lack an explicit enough underlying manifold structure. The attached PDF (see general response) contains a table presenting preliminary results in the supervised scenario for a random dataset from the UCI ML Repository. RMSE performance with IMGP surpassed that of EGP for larger sample sizes. However, NLL exhibited a less favorable trend compared to our observations for rotated MNIST. This warrants further investigation. It is possible that the chosen dataset, despite the high-dimensional feature space, does not inherently embody a manifold structure, or such a structure could be highly irregular. This is supported by the need for a substantial number of eigenpairs in achieving satisfactory outcomes in contrast to rotated MNIST, i.e. 
slow spectrum decay. We will discuss this example or a similar additional example in the paper, as well as an example for a manifold in 3D (results in this case are also present in the attached PDF) that we excluded before to save space. *"Q1"* * Yes, this is discussed in detail in Borovitskiy et al. (2021). This reference was accidentally removed from lines 98-100 in one of the final versions of the draft. Thank you very much for spotting this! *"Q2"* * There are plenty of ways to estimate $\operatorname{dim}(M)$, from classical (Levina and Bickel, 2004) to modern (Denti et al., 2022). We do not estimate $\operatorname{dim}(M)$ though, for multiple reasons. First and foremost, because this merely results in a reparameterization of the Matérn family: the meaning of the parameter $\nu$ becomes different, but in any case it is an unknown parameter to be chosen somehow. Second, the possible values of $\nu$ will usually be restricted by the problem size and computational resources at hand because larger $\nu$ implies higher costs. A reasonable approach is to choose $\nu$ by grid search where the grid consists of small integer values $1, 2, 3, \ldots$. *"Q3"* * Larger values of $K$ should in principle---and usually in practice, as we observed---lead to better results: the weighting of the graph does the heavy lifting of capturing the geometry. As a rule of thumb, you should use $K$ as large as you can afford. *"Q4"* * The KNN graph we are using is always connected. We further use such a bandwidth prior that prevents weights from being overly small. To consider datasets with well-separated connected components, one can consider clustering them and then applying our technique to each of the clusters separately. This would correspond to the geometric assumption of having a disconnected manifold. This would, however, often make little sense because the aforementioned implies zero correlation between points on different connected components, i.e. 
this would correspond to modeling the connected components independently. Some datasets may possess a hierarchical structure: have many (small) disconnected components which, as whole components, covary with one another. This might require some modified geometric assumptions (and model) to handle. *"Q5"* * We expect that convergence still holds for KNN graphs even in the case of a non-uniform density (when using Coifman's normalized adjacency matrix $\mathbf{A} = \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-1}$), but such a result cannot be found in the literature, to the best of our knowledge. When the distribution is uniform and the usual adjacency matrix ($\mathbf{A} = \tilde{\mathbf{A}}$) is used, the convergence for KNN graphs is studied, for example, in Calder and Trillos (2022). *"Q6"* * Thank you for spotting this. We realize now that this statement is unclear: the term "weighting scheme" is neither very intuitive nor introduced in any way. What is meant here is that, following Coifman et al. (2006), we (1) use the random-walk normalized Laplacian, as opposed to the unnormalized one or the symmetric normalized one, and (2) use the normalized adjacency matrix $\mathbf{A} = \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-1}$ to counteract the presence of non-uniform density---using this adjacency is what we meant by the "weighting scheme" because it defines the graph weights. *"Q7"* * Like, for example, in Section 3 of https://proceedings.neurips.cc/paper_files/paper/2003/file/cf05968255451bdefe3c5bc64d550517-Paper.pdf. *"Q8"* * Thank you for this suggestion. We did not include such a picture because of continuity: the values of the eigenvector match the values of the extension close to the manifold. We are happy to add the picture you requested to the appendix. *"Q9"* * This is a typo indeed, thank you very much for spotting this! *"The authors do not discuss the limitations of their work"* * Thank you for mentioning this. 
Please see the general response. --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: I would like to thank the authors for their responses and the additional demonstrations. Regarding the graph construction, I agree that having more data implies that $K$ becomes a less critical parameter, but instead, I believe that the bandwidth of the kernel becomes critical for capturing the structure of the underlying manifold well with the graph. Anyway, this is a classic issue in graph construction, and the current paper focuses on a different problem. I also agree that the manifold assumption does not hold in many real-world datasets. I recommend the authors include 3D demonstrations in the updated paper, potentially showing the eigenvectors together with the extended version (e.g. in Fig. 3), and also clearly discuss the limitations of the proposed approach. As other reviewers mentioned, the novelty of the paper is somewhat limited in the sense that it combines previous approaches in a practical way. Overall, and in light of the other reviews and the consensus for acceptance, I increase my score and vote for borderline acceptance. --- Reply to Comment 1.1.1: Comment: We thank the referee for raising the score and supporting paper acceptance. We will implement the suggested changes in the camera-ready version.
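The density-counteracting weighting and random-walk Laplacian discussed in Q5/Q6 above can be illustrated in a few lines. Below is a minimal dense NumPy sketch (the function name is ours; a real implementation would use sparse matrices):

```python
import numpy as np

def random_walk_laplacian(A_tilde):
    """Sketch of the Coifman et al. (2006) weighting discussed in Q6:
    A = D~^{-1} A~ D~^{-1} counteracts non-uniform sampling density,
    then L = I - D^{-1} A is the random-walk normalized Laplacian."""
    d_tilde = A_tilde.sum(axis=1)              # D~: degrees of the raw KNN adjacency
    A = A_tilde / np.outer(d_tilde, d_tilde)   # density-normalized adjacency
    d = A.sum(axis=1)                          # D: degrees after normalization
    return np.eye(len(A)) - A / d[:, None]

# Toy symmetric weighted adjacency of a 4-node graph
A_tilde = np.array([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
L = random_walk_laplacian(A_tilde)
```

Because $D^{-1}A$ is row-stochastic, each row of $L$ sums to zero, which is a quick sanity check on the construction.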
Summary: This work extends the reach of Matérn Gaussian processes to additionally learn the implicit low-dimensional (unknown) manifold the data lives on - the existence of such a manifold is suggested by the manifold hypothesis. The theory draws from existing Laplacian Matérn Gaussian processes (Borovitskiy et al., 2020). The model they propose is differentiable w.r.t. all model and geometry hypers. It is able to scale to thousands of data points by leveraging the sparse structure of Matérn precision matrices. They provide a way to extend predictions to the whole R^{d} space by reverting to a Euclidean GP away from the manifold. The experimental evaluation shows support for the implicit manifold Gaussian process. Strengths: - The end-to-end differentiability where the kernel hyperparameters along with those that parameterise the underlying geometry can be learnt simultaneously using a single objective. - Scalability is achieved as the precision matrix corresponding to the KNN-implied graph is sparse. - The semi-supervised case is interesting, where a large amount of unlabelled data is leveraged to infer the underlying geometry of the manifold through a weighted nearest neighbour graph. All these points contribute to the quality and significance of the work. Weaknesses: - The exposition could benefit from an algorithm-style pseudocode for the training and prediction steps; it should start with the interpretation of the data as a graph, computation of the elements needed for the Matérn kernel, etc., as I am a bit confused about the order of the steps. - While the details are all there in section 3.2, I think there need to be separate sections for construction of the Matérn kernel on graph nodes, computing the dependencies, i.e. the eigenpairs of the graph Laplacian, computing the kernel on arbitrary vectors in R^{d}, computing the predictive posterior. - I don't understand this line - please explain or rewrite. Basically, why don't you just say how you compute dim(M)? 
Essentially, from the theoretical results of Section 3.1 we borrow a particular weighting scheme, aiming to cancel out the possibly non-uniform density, and a specific choice of the graph Laplacian (the random walk normalized one). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The final predictive distribution is a mixture of Gaussians - one is the canonical Euclidean posterior predictive with SE-ARD kernel and the other is the geometry aware kernel underlying the same canonical posterior predictive equations? - What is the order of the steps in training - the log marginal likelihood entails the precision matrix which depends on the eigenpairs, but you say in line 206 that they are computed after the hyperparameters are found, these are mentioned in line 216 - is the \hat{\kappa} the number of neighbours of the KNN graph? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think authors should devote a paragraph to this. I don't see limitations discussed anywhere. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *"The exposition could benefit from an algorithm style pseudocode ..", ".. I am a bit confused about the order of the steps.", "What is the order of the steps in training"* * Thank you for mentioning this. We will emphasize the general flow of the algorithm. To clarify, the algorithm can be summarized as follows. 1. We compute the KNN index using FAISS. This is enough to define matrix-vector products (MVPs) with the weighted adjacency matrix of the graph. 2. We find the optimal hyperparameters $\hat{\mathbf{\theta}}$ by maximizing the likelihood (14). Here we assume that $\nu$ is a small integer and use Proposition 2 to efficiently compute MVPs with the precision matrix using MVPs with the adjacency. 3. We compute a set of eigenpairs corresponding to the smallest eigenvalues of the Laplacian (with fixed hyperparameters $\hat{\mathbf{\theta}}$) using the Lanczos algorithm. 4. We define the kernel as $$ k(\mathbf{x}, \mathbf{x}') = \frac{\hat{\sigma}_f^2}{C_{\hat{\nu}, \hat{\kappa}}} \sum_{l=1}^L \Phi_{\hat{\nu}, \hat{\kappa}}(\lambda_l) f_l(\mathbf{x}) f_l(\mathbf{x}'), \qquad \Phi_{\nu, \kappa}(\lambda) = \left(\frac{2 \nu}{\kappa^2} + \lambda\right)^{-\nu} $$ where $f_l$ are the Laplacian eigenvectors extended to the whole $\mathbb{R}^d$ via (11). 5. We compute the posterior $f^{(m)}$ corresponding to the kernel $k$. We also perform Gaussian process regression (including hyperparameter tuning) with the Euclidean squared exponential kernel; we call the result $f^{(e)}$. The final predictive model is then given by their weighted average (12). *"While the details are all there in section 3.2, I think there need to be separate sections for construction of the Matérn kernel on graph nodes, computing the dependencies, i.e. the eigenpairs of the graph Laplacian, computing the kernel on arbitrary vectors in $R^{d}$, computing the predictive posterior."* * Thank you for suggesting how we can make the presentation clearer. 
This suggestion is clearly connected to the one above and to the questions some other referees have. We will revisit the presentation in Sections 3 and 4 to make them better structured, emphasize specific algorithm steps and their order, and distinguish our own contributions there from the previous work. We believe this will be fairly easy to do and will not require major changes. *"I don't understand this line - please explain or rewrite."* * Thank you for spotting this. We realize now that this statement is unclear: the term "weighting scheme" is neither very intuitive nor introduced in any way. What is meant here is that, following Coifman et al. (2006), we (1) use the random-walk normalized Laplacian, as opposed to the unnormalized one or the symmetric normalized one, and (2) use the normalized adjacency matrix $\mathbf{A} = \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-1}$ to counteract the presence of non-uniform density---using this adjacency is what we meant by the "weighting scheme" because it defines the graph weights. *"how do you compute dim(M)?"* * There are plenty of ways to estimate $\operatorname{dim}(M)$, from classical (Levina and Bickel, 2004) to modern (Denti et al., 2022). We do not estimate $\operatorname{dim}(M)$ though, for multiple reasons. First and foremost, because this merely results in a reparameterization of the Matérn family: the meaning of the parameter $\nu$ becomes different, but in any case it is an unknown parameter to be chosen somehow. Second, the possible values of $\nu$ will usually be restricted by the problem size and computational resources at hand because larger $\nu$ implies higher costs. A reasonable approach is to choose $\nu$ by grid search where the grid consists of small integer values $1, 2, 3, \ldots$. 
*The final predictive distribution is a mixture of Gaussians - one is the canonical Euclidean posterior predictive with SE-ARD kernel and the other is the geometry aware kernel underlying the same canonical posterior predictive equations?* * Almost, with one clarification: the final predictive distribution is a *Gaussian distribution*, not a mixture of Gaussians. Its mean and variance are convex combinations of the mean and variance of the canonical Euclidean posterior and the posterior under the geometry-aware kernel, similar to what you suggested. Another point to note is that in our experiments we did not use the ARD kernel, just the plain kernel with one length scale, although nothing prevents you from using it. *"the log marginal likelihood entails the precision matrix which depends on the eigenpairs, but you say in line 206 that they are computed after the hyperparameters are found, these are mentioned in line 216"* * You are right that the precision matrix can be (approximately!) expressed in terms of the eigenpairs corresponding to the smallest eigenvalues of the Laplacian. However, thanks to Proposition 2, for integer values of $\nu$ the precision matrix can also be represented (exactly!) as a polynomial of the Laplacian. This allows us to efficiently compute exact matrix-vector products with the precision without evaluating the eigenpairs on each optimization step, and to find the hyperparameters without inefficient differentiation through eigenpair-computing routines. *"is the $\hat{\kappa}$ the number of neighbors of the KNN graph?"* * No, $\hat{\kappa}$ is the length scale of the geometric kernel, after optimization. *"I think authors should devote a paragraph to this. I don't see limitations discussed anywhere."* * Thank you for mentioning this. Please see the general response. --- Rebuttal Comment 1.1: Title: Post rebuttal comment Comment: I would like to thank the authors for their response and clarifications. 
I am happy to support this work and raise my confidence score to a 4 and keep my overall score intact at 6. I would urge the authors to restructure section 3.2, clarify the training algorithm and ensure that all the parameters are described before being used in equations, for example \hat{\kappa}.
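As a rough illustration of the kernel-construction pipeline discussed in this thread (KNN graph, graph Laplacian, spectral Matérn kernel), here is a minimal dense NumPy sketch. It deliberately simplifies the method: symmetric normalized Laplacian instead of the random-walk one, full eigendecomposition instead of Lanczos, no density normalization, and no extension of eigenvectors off the graph; all names are ours:

```python
import numpy as np

def graph_matern_gram(X, nu=2, kappa=1.0, sigma_f=1.0, k_nn=6, n_eig=10):
    """Dense, small-n sketch of a graph Matern kernel on the KNN graph of X."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    idx = np.argsort(D, axis=1)[:, 1:k_nn + 1]                   # KNN indices (skip self)
    nd = np.take_along_axis(D, idx, axis=1)                      # neighbor distances
    bw = np.median(nd)                                           # crude fixed bandwidth
    A = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k_nn)
    A[rows, idx.ravel()] = np.exp(-nd.ravel()**2 / (2 * bw**2))  # Gaussian edge weights
    A = np.maximum(A, A.T)                                       # symmetrize KNN graph
    deg = A.sum(axis=1)
    L = np.eye(n) - A / np.sqrt(np.outer(deg, deg))              # sym. normalized Laplacian
    lam, f = np.linalg.eigh(L)                                   # eigenpairs, ascending
    lam, f = lam[:n_eig], f[:, :n_eig]
    phi = (2.0 * nu / kappa**2 + lam) ** (-nu)                   # Matern spectral density
    phi *= sigma_f**2 * n / phi.sum()                            # mean prior variance sigma_f^2
    return (f * phi) @ f.T                                       # sum_l Phi(lam_l) f_l f_l^T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # toy point cloud in R^3
K = graph_matern_gram(X)
```

By construction the Gram matrix is symmetric positive semi-definite (it is a non-negative combination of rank-one terms), which mirrors why the spectral definition of the kernel is valid.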
Rebuttal 1: Rebuttal: We thank the referees for their valuable summaries and insights, which will help us improve the paper. We have replied to every review with detailed comments. An essential suggestion made by many referees was to add an explicit discussion of limitations. One major limitation is that there is a trade-off between the complexity of the geometry and the non-uniformity of the sampling density on one side and the number of unlabeled data points needed to capture meaningful structure on the other side. The model works well when the former is not too pronounced and the latter is large. Further, the model could struggle for clustered data. Possible avenues for improving on the mentioned drawbacks include learning a graph in a more sophisticated way (e.g. adaptive bandwidth, differentiable graph learning) or clustering and combining models. Another limitation is that larger values of $\nu$ and $K$, which may sometimes be desirable, imply higher costs and are not always feasible. We fully agree that including this explicitly would improve paper quality and aim to include it in the camera-ready version. Pdf: /pdf/9b08e1946799a1419deec66a59cff399bdd9817e.pdf
NeurIPS_2023_submissions_huggingface
2023
Context-PIPs: Persistent Independent Particles Demands Spatial Context Features
Accept (spotlight)
Summary: This paper aims at estimating persistent long-term trajectories of query points in videos. Noting that existing methods ignore the potential benefits of incorporating spatial context features, this paper argues that independent video point tracking also demands spatial context features, and proposes a novel framework, Context-TAP, which effectively improves point trajectory accuracy by aggregating spatial context features in videos. The framework Context-TAP contains two modules: 1) a Source Feature Enhancement module, and 2) a Target Feature Aggregation module, enhancing point features with surrounding information from the source image and target images, respectively. Context-TAP ranks 1st on four benchmarks and shows clear performance superiority. Strengths: 1). This paper is well-written and easy to understand 2). Experimental results are extensive and better than existing baseline models Weaknesses: The motivation and the solution to the problem are reasonable but seem like regular approaches that are very common. The use of spatial context, i.e., regressing samples to assist with features, has been seen very often. Therefore, the novelty of the paper is limited. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) In the SOFE module, are auxiliary features involved in the update iteration? 2) Whether the inference process can be computed in parallel, and whether the computation time is the same or multiplied compared to the computation of only two frames of optical flow 3) Is it possible to visualize predicted samples to see if they actually capture semantically useful key points? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The outlook discussed in the limitations section is an important problem to be solved in the TAP task, with practical application scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The novelty of using spatial context. Spatial context is independent of temporal information. However, most of the existing methods mainly use temporal information. We argue that our spatial context feature component is novel: 1. We did not naively adopt common spatial context feature techniques [1-2]. Instead, we fuse the cost information of the auxiliary context features for point tracking refinement, which has never been explored before. This may also further motivate the network design in other tasks. 2. We first show that target spatial context features improve point tracking, which is ignored in closely related correspondence tasks such as optical flow and stereo matching. As suggested by Reviewer ujcD, our design may be further utilized in other tasks such as COTR and optical flow. > Are auxiliary features involved in the update iteration? Yes. > Whether the inference process can be computed in parallel Yes. > Whether the computation time is the same or multiplied compared to the computation of only two frames of optical flow. No. For an 8-frame FlyingThings++ sequence, Context-TAP consumes 0.225s, while running RAFT 7 times consumes 2.244s, almost 10x slower. > Is it possible to visualize predicted samples to see if they actually capture semantically useful key points? Yes. We will add the visualization of learn-to-sample results in the final version. [1] Luo, Hao, et al. Object detection in video with spatial-temporal context aggregation. [2] Simon and Liu. Context-aware synthesis for video frame interpolation.
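The "sample auxiliary context features and fuse their cost information" idea from point 1 of this rebuttal can be sketched in a few lines. A toy NumPy illustration follows; it uses nearest-neighbor sampling and random offsets for brevity, whereas the actual method learns the offsets and uses bilinear sampling, and all names here are ours:

```python
import numpy as np

def sample_auxiliary_features(src_feat, query_xy, offsets):
    """Gather auxiliary source features at offsets around a query point
    (nearest-neighbor sampling, clipped to the feature map bounds)."""
    H, W, C = src_feat.shape
    pts = np.rint(query_xy[None, :] + offsets).astype(int)  # (S, 2) as (x, y)
    x = np.clip(pts[:, 0], 0, W - 1)
    y = np.clip(pts[:, 1], 0, H - 1)
    return src_feat[y, x]                                   # (S, C)

def auxiliary_correlations(aux_feats, tgt_feat):
    """Scaled correlations of each auxiliary feature with a target feature;
    cost values like these are what gets fused into trajectory refinement."""
    return aux_feats @ tgt_feat / np.sqrt(aux_feats.shape[-1])  # (S,)

rng = np.random.default_rng(0)
src = rng.normal(size=(32, 32, 8))          # toy source feature map (H, W, C)
offs = rng.normal(scale=3.0, size=(4, 2))   # 4 offsets (learned in the real method)
aux = sample_auxiliary_features(src, np.array([10.0, 12.0]), offs)
corr = auxiliary_correlations(aux, rng.normal(size=8))
```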
Summary: This paper tackled the problem of Tracking Any Point (TAP). Given a query point and a series of video frames, output all the coordinates corresponding to the query point in the video frames. The problem is interesting and can be regarded as an extension of optical flow. PIPs only takes the features corresponding to the query point because this task estimates point trajectories independently. This paper takes the features surrounding the query point, called ``spatial context features'', to improve the accuracy of the point trajectory estimation. The motivation is reasonable. Experiments also demonstrate that Context-TAP outperformed PIPs by large margins and also improves the efficiency. Strengths: 1. The proposed SOFE module, which learns to sample more features in the source image, significantly improves the performance. I think the essence behind the module is the enlargement of the receptive field in the cost volume. In recent years, point-based correspondence estimation has emerged, such as COTR [1] and ECO-TR [2]. The proposed technique may also be applied to these methods. 2. Context features are widely adopted in many SOTA correspondence models, such as stereo matching [3] and optical flow [4], i.e., the $F$ in Context-TAP, but they focus on the context features of the original pixel. This paper further aggregates the context features in the target images via an attention mechanism (TAFA), which may also motivate the model design in other correspondence tasks. 3. The motivation is well presented. 4. The experiments show that with the aid of the context features, the number of MLP-Mixer layers can be largely decreased, which improves the efficiency. 5. The ablation study is conducted thoroughly, revealing the necessity of the attention-based TAFA and the number of samples. [1] Jiang et al. COTR: Correspondence Transformer for Matching Across Images [2] Tan et al. ECO-TR: Efficient Correspondences Finding Via Coarse-to-Fine Refinement [3] Lipson et al. 
RAFT-Stereo: Multilevel Recurrent Field Transforms for Stereo Matching [4] Teed and Deng. RAFT: Recurrent All-Pairs Field Transforms for Optical Flow Weaknesses: 1. TAP-Net also provides the occlusion accuracy (OA) metric. I am curious about how the spatial context features affect the occlusion prediction accuracy, but there is no discussion in the experiments. What is the reason? 2. The efficiency comparison only compares the number of parameters in the paper. I think time and memory usage are more important in practice. 3. I find that the authors provide a better model $K=6$ in the supplementary. What's the reason? Should this model be moved to the main paper? 4. The benefit brought by the TAFA is not large. 5. The SOFE module computes correlations between additional source features and target features. There are some other correlations that can be computed as context features. For example, the correlation between adjacent target images. Would this additional information also benefit TAP? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Context-TAP adopts a fixed window size and handles long-term videos in a sliding manner. Context-TAP may struggle to track a point occluded for a long time interval. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > TAP-Net also provides the occlusion accuracy (OA) metric. I am curious about how the spatial context features affect the occlusion prediction accuracy, but there is no discussion in the experiments. What is the reason?

The OA comparison is listed below:

| Method | K | MLP-Mixer Depth | TAP-Vid-DAVIS (first) | TAP-Vid-Kinetics (first) | TAP-Vid-DAVIS (strided) | TAP-Vid-Kinetics (strided) |
|--------------------|---|-----------------|-----------------------|--------------------------|-------------------------|----------------------------|
| TAP-Net | - | - | 78.8 | 80.6 | 82.3 | 85.0 |
| PIPs (Re-imp.) | 6 | 12 | 79.3 | 77.0 | 82.9 | 81.5 |
| PIPs (Released) | - | - | 79.0 | 75.7 | 83.2 | 81.0 |
| Context-TAP (Ours) | 6 | 12 | 79.5 | 79.8 | 83.4 | 83.3 |

> The time and memory usage comparison. We will add the memory usage and FLOPs in the final version. When the number of MLP-Mixer layers is significantly reduced, the parameters of the network (11.54M vs. 28.67M) and the FLOPs (216.4G vs. 287.5G), evaluated with pytorch-OpCounter [1], are reduced by up to 59.7% and 24.7%, respectively. > Why is the model K=6 in the supplementary? Due to the limited time, we could not finish this experiment before the submission deadline. We will move the results to the main paper in the final version. > The benefit brought by the TAFA is not large. The improvement is still large: TAFA reduces the Average Trajectory Error of Occluded Points (ATE-Occ) on CroHD by 5.9%. > Would more additional information also benefit TAP? Thanks for your suggestions. We believe so and this will become our future work. [1] https://github.com/Lyken17/pytorch-OpCounter --- Rebuttal Comment 1.1: Title: Reply authors Comment: Thanks for the authors' responses. I still hold my score.
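For reference, the occlusion accuracy (OA) numbers in the table above measure how often the visibility/occlusion prediction matches ground truth. A simplified sketch of such a metric (the exact TAP-Vid evaluation protocol and threshold may differ):

```python
import numpy as np

def occlusion_accuracy(pred_visibility, gt_visible, threshold=0.5):
    """Fraction of (point, frame) pairs whose thresholded predicted
    visibility agrees with the ground-truth visibility flag."""
    pred_visible = np.asarray(pred_visibility) > threshold
    return float(np.mean(pred_visible == np.asarray(gt_visible)))

pred = np.array([0.9, 0.2, 0.6, 0.1])      # predicted visibility scores
gt = np.array([True, False, False, False])  # ground-truth visibility
acc = occlusion_accuracy(pred, gt)          # 3 of 4 agree -> 0.75
```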
Summary: This paper presents some technical improvements to PIPs, which is a state-of-the-art method for multi-frame pixel tracking. There are two key modifications to the architecture. The first is: rather than only use the feature of the target to represent the appearance of the target, look at the cost map and estimate a few offsets, and sample additional appearance features at these offsets; this yields additional correlations to be used by the temporal model (the MLP-Mixer). The second modification is: on each frame, do some QKV attention to obtain some additional features, and use these to perform a feature update, in addition to (or instead of (this wasn't entirely clear)) the feature update normally done by the model. These modifications yield 1-2 points gain in accuracy in each dataset considered. Strengths: Using additional appearance information from the first frame, and additional appearance information across time, are very sensible contributions over PIPs. It is also exciting that these modifications were achieved without increasing the computational complexity of the method. Weaknesses: For me, the main weaknesses have to do with clarity, mostly in the writing, but also in the results. In the experiments, it would be nice to see the "d_avg" metric reported on the TAP-Vid benchmarks, as computed in the TAP-Vid paper. Or maybe this is re-named here to A-PCK? Overall, in Table 1 and Table 2, it does not seem like the PIPs results on DAVIS or Kinetics exactly match the table from the TAP-Vid paper. Why is this? There is some difference between "PIPs (Paper)" and "PIPs (Re-imp.)", but I could not find text talking about this. I understand that the PIPs github provides a model slightly improved over the original paper, plus maybe some bug-fixes. Is that the re-implementation referenced here? Or is this a new implementation, produced independently by the authors? 
"We train our Context-TAP and PIPs with different MLP-Mixer depths, i.e., the number of layers in the MLP-Mixer, to show the extraordinary efficiency and effectiveness of our proposed Context-TAP." I am not sure that training with different MLP-mixer depths has anything to do with "extraordinary" aspects of the method or results. Perhaps this sentence should be re-considered, or supported in some way? Overall, considering that the method here builds on PIPs and not TAP-Net, it seems to me that a less confusing name would be something like Context-PIPs (rather than Context-TAP). I think the acronyms SOFE and TAFA are not improving the clarity of the paper. In lines 62-67, the first contribution seems to be just a high-level idea; the second contribution seems to be two independent contributions, and the third contribution is actually a measurement of the middle contributions. I think this could be rewritten into two good contributions plus an evaluation. "PIPs and TAP solve the video particle tracking problem in a similar manner, i.e., recurrently refining multi-frame point trajectory via correlation maps." It seems not accurate to say that TAP-Net involves iterative/recurrent refinement. SOurse -> Source eveluated -> evaluated Technical Quality: 3 good Clarity: 3 good Questions for Authors: "The augmented correlation features Cˆ in Eq. 3 encode abundant visual similarities. Therefore, we generate a query from it to extract target context features ..." I do not understand this part. The correlations contain similarity information, but not appearance information. It sounds like they are being used to create an appearance query. How (or why) does this work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Looks fine Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > In the experiments, it would be nice to see the "d_avg" metric reported on the TAP-Vid benchmarks, as computed in the TAP-Vid paper. Or maybe this is re-named here to A-PCK? Yes. We just renamed "d_avg" to A-PCK here because the Average Percentage of Correct Keypoints (PCK) is a more common terminology in correspondence tasks [1-3]. We will clarify the name in the revised version. > Overall, in Table 1 and Table 2, it does not seem like the PIPs results on DAVIS or Kinetics exactly match the table from the TAP-Vid paper. Why is this? Because we cannot exactly reproduce the performance of the released model even with the official repository. We thus provide three versions of PIPs: PIPs (paper), PIPs (re-implement), and PIPs (released). We align all the training settings of PIPs (re-implement) to our Context-TAP for a fair comparison. The other two results are provided in the supplementary for reference. Notice that our PIPs (re-implement) is even better than PIPs (released) on Kinetics because the released PIPs tends to overfit the FlyingThings++ dataset, as discussed in L12 of our supplementary. TAP-Net evaluates PIPs on DAVIS and Kinetics with the released model. We also provide such a model, named "PIPs (released)". Its performance is improved because we slightly tune the visibility threshold in the chaining rule of PIPs. Specifically, for a video longer than 8 frames, PIPs iteratively selects new visible starting points and chains the trajectory. In PIPs, a point is visible if the predicted visibility is larger than 0.9. However, we observe that there is a significant visibility distribution bias on the training set of FlyingThings++. We use 0.9 as the visibility threshold on DAVIS so that PIPs achieves slightly better results than those reported by TAP-Net. > There is some difference between "PIPs (Paper)" and "PIPs (Re-imp.)", but I could not find text talking about this. 
I understand that the PIPs github provides a model slightly improved over the original paper, plus maybe some bug-fixes. Is that the re-implementation referenced here? Or is this a new implementation, produced independently by the authors? We provide the details of our PIPs re-implementation in L12 of our supplementary materials and will make it clearer in the final version. We use the code from the PIPs repository to re-implement PIPs. As stated in the previous question, we cannot perfectly reproduce the numbers reported by the PIPs paper or the model released in the repository, so we align the training settings of PIPs with our Context-TAP for a fair comparison. > I am not sure that training with different MLP-mixer depths has anything to do with "extraordinary" aspects of the method or results. Perhaps this sentence should be re-considered, or supported in some way? Here we want to express that when the spatial context information is introduced, comparable results can be obtained even if the number of MLP-Mixer layers is significantly reduced. The parameters of the network (11.54M vs. 28.67M) and the FLOPs (216.4G vs. 287.5G), evaluated with pytorch-OpCounter [4] for point tracking, are reduced by up to 59.7% and 24.7%, which is prominent. We will replace "extraordinary" with "prominent". > Overall, considering that the method here builds on PIPs and not TAP-Net, it seems to me that a less confusing name would be something like Context-PIPs (rather than Context-TAP). Thanks for your suggestion. The main reason we use TAP is that we think Tracking Any Point (TAP) is a more general and easy-to-understand term compared with Persistent Independent Particles (PIPs). > I think the acronyms SOFE and TAFA are not improving the clarity of the paper. The acronyms SOFE and TAFA are mainly to shorten the names of the modules, which is a commonly adopted strategy in previous literature such as LoFTR [1], RAFT [6], and GMA [5]. 
We will also carefully consider the naming in the final version, such as Source feature Enhancement Module (SEM) and Target feature Aggregation Module (TAM). > In lines 62-67, I think this could be rewritten into two good contributions plus an evaluation. Thanks for the suggestion! We will reorganize the description of our contributions. > "PIPs and TAP solve the video particle tracking problem in a similar manner, i.e., recurrently refining multi-frame point trajectory via correlation maps." > It seems not accurate to say that TAP-Net involves iterative/recurrent refinement. > SOurse -> Source > eveluated -> evaluated We will refine the statement and fix the typos in the final version. > "The augmented correlation features Cˆ in Eq. 3 encode abundant visual similarities. Therefore, we generate a query from it to extract target context features ..." > I do not understand this part. The correlations contain similarity information, but not appearance information. It sounds like they are being used to create an appearance query. How (or why) does this work? Pixels that hold similar appearance features share similar motions, so we learn to find auxiliary features that provide informative motion cues through the feature similarities, i.e., correlations. This design is inspired by GMA [5], which smooths the optical flow estimation by averaging flows weighted by appearance similarities. [1] Sun, Jiaming, et al. LoFTR: Detector-free local feature matching with transformers. [2] Truong, Prune, et al. PDC-Net+: Enhanced probabilistic dense correspondence network. [3] Huang, Zhaoyang, et al. NeuralMarker: A framework for learning general marker correspondence. [4] https://github.com/Lyken17/pytorch-OpCounter [5] Jiang, Shihao, et al. Learning to estimate hidden motions with global motion aggregation. [6] Teed and Deng. RAFT: Recurrent all-pairs field transforms for optical flow.
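The GMA-style mechanism described in the last answer, forming a query from correlation features to attend over target context features, can be sketched as follows (single point, single head, NumPy; all shapes and names are ours, not the paper's implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def target_feature_aggregation(corr_feat, ctx_feats, Wq, Wk, Wv):
    """A query derived from correlation (similarity) features attends over
    surrounding target context features; the attended value is the
    aggregated context feature used for refinement."""
    q = corr_feat @ Wq                           # (d,) query from correlations
    k = ctx_feats @ Wk                           # (m, d) keys from context
    v = ctx_feats @ Wv                           # (m, d) values from context
    attn = softmax(k @ q / np.sqrt(q.shape[-1])) # (m,) attention weights
    return attn @ v                              # (d,) aggregated feature

rng = np.random.default_rng(0)
d = 16
corr_feat = rng.normal(size=d)        # stands in for the correlation features C-hat
ctx_feats = rng.normal(size=(9, d))   # e.g. a 3x3 neighborhood of context features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = target_feature_aggregation(corr_feat, ctx_feats, Wq, Wk, Wv)
```

This mirrors the rebuttal's rationale: pixels with similar appearance (high correlation) receive higher attention weight, so their context features contribute more to the aggregated motion cue.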
Summary: This work proposes a method for video point tracking. It is built upon the previous method of persistent independent particles (PIPs). The authors add context features to the source and target feature encoding in PIPs. The resulting method is called Context-TAP. The proposed method is evaluated on multiple benchmarks for video point tracking. Improved tracking accuracy is observed over PIPs and other baseline methods. Strengths: + Using more context features is tried and true in many vision tasks. This work shows it is also helping with the PIPs method. + The paper is generally well written with thorough experiments. The authors provide a detailed ablation study on the design parameters in this method. Weaknesses: - My primary concern is about the scope of this work. The claim is that video point tracking needs contextual information, but the execution is a modification to a specific method for this task. Whether the proposed modification is general enough to boost multiple methods or only specific to PIPs is unknown. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Based on my primary concern, my first question is: do the proposed modules work on any other method that deals with this task? - From the comparison with baseline methods, the numeric improvement of tracking accuracy seems less drastic than that of PIPs over other methods. An explanation of the accuracy improvement would help justify the significance of this work. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have stated the limitations of this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Do the proposed modules work on any other method that deals with this task? Yes, we think so, because both recent point-tracking networks [1-2] are built upon PIPs, which iteratively refine point trajectories via cost information. However, they only take the features of the query points into consideration, so it is natural to add our modules to benefit from the spatial context. We only test our modules on PIPs because it is the first and only method that provides open-sourced PyTorch code for point tracking. Applying our modules to other point-tracking networks would be our future work. > From the comparison with baseline methods, the numeric improvement of tracking accuracy seems less drastic than that of PIPs over other methods. An explanation of the accuracy improvement would help justify the significance of this work. Our Context-TAP outperforms PIPs by reducing the Average Trajectory Error of Occluded Points (ATE-Occ) by 11.4% on CroHD and increasing the Average Percentage of Correct Keypoints (A-PCK) by 11.8% on TAP-Vid-Kinetics. PIPs' improvement is drastic because it is the first method that learns point tracking with longer temporal information and the new FlyingThings++ dataset. The methods PIPs compared against, such as RAFT, were not designed for the TAP task. [1] Doersch, Carl, et al. TAPIR: Tracking Any Point with per-frame Initialization and temporal Refinement. [2] Zheng, Yang, et al. PointOdyssey: A Large-Scale Synthetic Dataset for Long-Term Point Tracking.
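The A-PCK metric cited in this answer can be computed roughly as below. The pixel thresholds follow the common {1, 2, 4, 8, 16} convention used by TAP-Vid-style benchmarks; the toy points are illustrative.

```python
def pck(pred, gt, thresholds=(1, 2, 4, 8, 16)):
    """Average Percentage of Correct Keypoints: for each pixel
    threshold, the fraction of predictions within that Euclidean
    distance of the ground truth; A-PCK averages over thresholds."""
    def frac_within(t):
        ok = sum(1 for p, g in zip(pred, gt)
                 if ((p[0] - g[0]) ** 2 + (p[1] - g[1]) ** 2) ** 0.5 <= t)
        return ok / len(pred)
    return sum(frac_within(t) for t in thresholds) / len(thresholds)

pred = [(0.0, 0.0), (3.0, 4.0)]   # second point is 5 px off
gt = [(0.5, 0.0), (0.0, 0.0)]
score = pck(pred, gt)
```

The first point is correct at every threshold and the second only at 8 and 16 px, so the average lands at 0.7.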
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Bayesian Learning of Optimal Policies in Markov Decision Processes with Countably Infinite State-Space
Accept (poster)
Summary: This paper focuses on an online learning setting for Markov Decision Processes (MDPs) with a countably infinite number of states. It adopts a Bayesian learning perspective, assuming that the parameters of the MDP follow a prior distribution over a known parameter space. The paper proposes a Thompson-sampling-like approach to solve the MDP in an online fashion. This approach assumes access to an optimal policy oracle, where the parameters of the MDP are provided as inputs, and it also relies on specific assumptions about the features of the parameter space. Strengths: The model investigated in this paper exhibits a high degree of generality, and the results presented contribute significantly to the field of theoretical reinforcement learning by offering near-optimal algorithms for MDPs without a bounded state space. The inherent complexity of the problem necessitates intricate proofs, and although I haven't examined the complete proof in detail, I have confidence in the correctness of the underlying intuitions. The proof combines Lyapunov analysis with the proof presented in [38] for Bayesian learning in an MDP with a bounded state space, thus offering a potentially valuable contribution for future research. Additionally, the simulations conducted in the paper, which demonstrate the scaling of the algorithm's regret, are appreciated for providing empirical evidence supporting the algorithm's performance. Weaknesses: The results of the paper rely on a set of assumptions that may be difficult to verify. Of particular concern is Assumption 3, which assumes stability of the optimal policy computed under one set of MDP parameters even when the system evolves under another set of parameters. Establishing this property for more general systems can be challenging and requires careful calibration of the parameter space and policy space.
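The Thompson-sampling-like loop this summary describes (sample a model from the posterior, query the optimal-policy oracle for that model, act, update the posterior) can be sketched for a toy two-model family. Everything here, the two candidate arrival rates, the stand-in oracle, and the deterministic observation stream, is an illustrative assumption, not the paper's actual algorithm.

```python
import random

random.seed(0)

# Toy two-model family: candidate arrival probabilities 0.3 and 0.7;
# the true rate is 0.3. Uniform prior over the two models.
posterior = {0.3: 0.5, 0.7: 0.5}

def oracle(theta):
    """Stand-in for the optimal-policy oracle: maps a sampled model
    to a policy label."""
    return "serve-fast" if theta > 0.5 else "serve-slow"

def update(post, arrival):
    """Exact Bayes update of the two-atom posterior after observing
    one arrival / no-arrival event."""
    new = {th: p * (th if arrival else 1 - th) for th, p in post.items()}
    z = sum(new.values())
    return {th: p / z for th, p in new.items()}

for episode in range(50):
    theta_k = 0.3 if random.random() < posterior[0.3] else 0.7  # posterior sample
    policy = oracle(theta_k)  # policy followed during this episode
    # Deterministic observation stream mimicking a 0.3 arrival rate
    # (3 arrivals per 10 episodes), kept deterministic for reproducibility.
    arrival = (episode % 10) < 3
    posterior = update(posterior, arrival)
```

After 50 observations at the true rate, the posterior mass concentrates on the 0.3 model, so later episodes almost always use that model's policy.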
The algorithm presented in the paper is heavily dependent on access to an oracle capable of solving the optimal policy, which itself is a complex problem for general queueing systems. This reliance on an oracle can limit the practical applicability of the algorithm. The algorithm necessitates returning to state 0 (line 14) at the end of each episode. This requirement could result in an exponential dependence on the maximum queue length, potentially rendering the algorithm less relevant for practical implementation. As a result, the claim of practicality made in line 345 may not be adequately supported. In relation to the previous point, the paper obscures many constants within the theoretical results. These constants, associated with the system's dimension and ergodicity, could play a crucial role in determining practical performance and should be given more attention and consideration. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there a heuristic for designing the parameter space and policy space to ensure that Assumption 3 holds for a queueing system? What would happen to the algorithm if this assumption fails? It would be beneficial to test the algorithm in a more general system, such as the ones described in [41, 52]. How crucial is the requirement for an optimal policy oracle? Would the analysis still hold true if the policy space is restricted to, for example, the MaxWeight policy? It would be highly valuable if the results were presented in a form such as "the total queue lengths of this algorithm minus that of the MaxWeight policy is less than or equal to \sqrt{T}". This would also eliminate the need for the optimal oracle. Could you discuss the dependence of the regret on the system size? Why does the dependence on T for the regret hold more significance than these constants? Please maintain consistency by using either "queueing" or "queuing" throughout the paper, but not both. 
In line 122, after "ergodicity)", there should be a period. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestions on the presentation of the paper. We will make changes in the final version based on the suggestions. >Weakness 1. Necessity of assumptions. A. Given a specific parameter class $\Theta$, the assumptions can be verified for all MDPs corresponding to $\theta \in \Theta$, as demonstrated in Appendix E for the two queueing models of Figure 2. While there isn't a programmatic/algorithmic way to find Lyapunov functions, queueing systems have accumulated a repository of Lyapunov functions (for classes of models and policies) over the years, making queueing systems a reasonable application domain to consider. Assumption 3 is reasonable for queueing models since policies such as weighted Max-Weight are stabilizing for a large class of models. However, to the best of our knowledge, proving geometric ergodicity in such settings is an open problem; we prove it for the example of Figure 2b as we are using weighted Max-Weight. >Weakness 2. Requirement of an optimal policy oracle. A. For the requirement of an optimal policy oracle, please see Remark 2 in the global response. >Weakness 3. Bounds on hitting time of state $0^d$. A. Assumption 4 bounds the first $r+1$ moments of the hitting time of state $0^d$ from any initial state $\boldsymbol{x}$ using a polynomial Lyapunov function; see Lemma 10. In contrast to the Lyapunov function used to prove geometric ergodicity (from Assumption 3), which is usually an exponential function of some norm of the state, the Lyapunov function used to show polynomial ergodicity (from Assumption 4) is often a polynomial function of the state, which then leads to the hitting time to $0^d$ being polynomially dependent on the initial state $\boldsymbol{x}$.
Our queueing examples in Appendix E employ exponential Lyapunov functions to prove geometric ergodicity, and quadratic Lyapunov functions to establish polynomial ergodicity, with the latter leading to polynomial bounds for the hitting time of state $0^d$. >Weakness 4. Dependence of regret bound on problem parameters. A. Please see Remark 3 in the global response. >Question 1. Requirement of Assumption 3. A. Assumption 3 needs to be verified on a case-by-case basis for the optimal policy of each model that can occur; see the Figure 2a example. This is challenging since optimal policies have been characterized only for a small number of systems. However, within a given policy class, more general statements can be made; e.g., checking Assumption 3 for Max-Weight policies for a large class of queueing systems is feasible. However, even for Max-Weight policies, establishing geometric ergodicity would be a new contribution to the literature, as the bulk of existing results establish ergodicity using a quadratic Lyapunov function. Note that the queueing model of Figure 2b (exemplifying Cor 1) uses Max-Weight policies à la [52], and is illustrative of such settings. The geometric ergodicity characterization for this queueing model is new in the literature, so a geometric ergodicity result is also needed for the models in [52] (we expect this to hold based on their model assumptions). While at present we do not know how to weaken Assumption 3 to an assumption similar to Assumption 4, we believe that stability assumptions across models are likely necessary for the countable space setting, for the following reasons.
First, in contrast to finite-state MDPs, where stability and existence of a stationary distribution are assured in a simple manner (via irreducibility or the existence of a single recurrent class, plus aperiodicity), the countable state-space setting needs additional conditions to ensure that the Markov process resulting from using any stationary policy $\pi \in \Pi$ is positive recurrent or ergodic. Furthermore, recovering after either using an unstable policy or starting in a transient state can be problematic, and with a countable set of transient states $S^{\mathrm{tr}}$, the expected time to exit $S^{\mathrm{tr}}$ can be infinite. >Question 2. Requirement of an optimal policy oracle. A. Please see Remark 2 in the global response. We also want to note that the queueing model of Figure 2b uses a weighted Max-Weight policy: we are selecting the best set of weights for each model within a class of weighted Max-Weight policies using the optimal oracle (the PPO algorithm used in simulations), and then regret is measured relative to the performance of the best weighted Max-Weight algorithm. >Question 3. Dependence of the regret on the problem parameters. A. Please see Remark 3 in the global response. We conclude by emphasizing that our sub-linear regret guarantee provides a rate-of-convergence characterization for the asymptotic optimality results developed in past work, such as [18, 27]. --- Rebuttal Comment 1.1: Comment: The authors' response addresses my questions, and I am particularly satisfied with the extension (Remark 2) that an optimal policy oracle may not be needed as long as the stability assumptions can be verified. I would like to raise my rating to 7. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback.
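The hitting-time quantities this exchange turns on (moments of the return time to state $0^d$) can be made concrete on a toy one-dimensional birth-death queue. The transition probabilities and the fixed-point solver below are illustrative assumptions, not the paper's model.

```python
# Expected hitting time h(x) of state 0 for a birth-death queue on
# {0, ..., N}: arrival (step up) w.p. P_UP, departure (step down)
# w.p. 1 - P_UP, reflecting at N.  h satisfies h(0) = 0 and
#   h(x) = 1 + P_UP * h(min(x + 1, N)) + (1 - P_UP) * h(x - 1),
# solved here by fixed-point iteration; iterates increase monotonically
# to the true (finite, since P_UP < 1/2) hitting times.
P_UP = 0.3
N = 20

h = [0.0] * (N + 1)
for _ in range(5000):
    new = [0.0] * (N + 1)
    for x in range(1, N + 1):
        new[x] = 1.0 + P_UP * h[min(x + 1, N)] + (1 - P_UP) * h[x - 1]
    h = new

# With drift 1 - 2 * P_UP = 0.4 toward 0, the expected time for the
# queue to decrease by one level is about 1 / 0.4 = 2.5, so h(1) ~ 2.5.
```

Because the hitting time from state x is roughly x times the per-level descent time, h grows linearly in x here; in unstable or heavier-tailed regimes it can blow up, which is exactly what the moment assumptions rule out.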
Summary: The authors study Bayesian learning for the problem of optimal control of a family of discrete-time countable state-space MDPs governed by an unknown parameter $\theta$ from a general parameter space $\Theta$, with each MDP evolving on a common countably-infinite state space $X$ and finite action space $A$. As the setting is Bayesian, they assume that the model is governed by an unknown parameter $\theta_\star \in \Theta$ generated from a fixed and known prior distribution. The learning goal is Bayesian regret minimization, where the value function is the infinite-horizon average cost and the regret is measured with respect to the best policy in $\Pi$. They prove a $\sqrt{TA}$ regret bound, up to poly-logarithmic factors, but the dependency on the complexity of the function class is unclear to me. Disclaimer: I am not very familiar with this area of the RL literature and hence might not understand the results correctly. Also, I could not fully verify the correctness of the presented results. I gave that assessment as the paper was very hard to follow for an unfamiliar reader. Strengths: 1. I think the results are a nice contribution to the Bayesian RL community. 2. The two examples presented contribute to the richness of the paper. 3. The presented bounds seem reasonable. 4. The adaptation of Thompson sampling to countable state spaces might be useful in other RL settings. Weaknesses: 1. The abstract is quite long and parts of it seem like a copy-paste of the introduction. 2. The writing requires improvement: (1) clarifying the contribution relative to the existing literature; (2) the notation is hard to follow and makes proof reading hard; (3) the complexity of $\Pi$ or the dimension $d$ should appear in the presented bounds; even if the dependency on them is logarithmic, those parameters are not negligible. 3. The dependency on the complexity of the function class in the regret bound needs to be stated. Is it logarithmic for finite $\Pi$? Is the regret dependent on the covering number?
I would be happy if the authors could clarify that point. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestions on the presentation of the paper and other remarks, and we will make changes in the final version. We note that the references marked with a letter are listed at the end of the response and are not included in the submission. For discussion on the dependence of our regret bound on problem parameters, please see Remark 3 in the global response. Regarding the dependence of the regret on the complexity of $\Pi$: using our method and assumptions, it is not easy to characterize the regret in terms of complexity and associate Lyapunov-type conditions with covering numbers. The complexity implicitly determines the uniformity conditions of Assumptions 3, 4, and 5, and it is challenging to characterize the regret separately in terms of the complexity of $\Pi$. In essence, the bounds are akin to instance bounds. This is easily understood when $\Pi$ is finite, as the quantities $J^*$ and $r_*^p$ above are the result of the maximization of similar parameters over a finite number of policies and models. To address the dependency on the covering number, we would need to explore model mismatch/perturbations more carefully, but such results are not currently available for countable state-space MDPs (to the best of our knowledge). Our assumptions result in $\Pi$ being compact, and so it is totally bounded; given any error terms, we can get a finite cover of the parameter space. By using the centers of these covers (sampling from the posterior and projecting to the closest center that covers the sample), a theoretical analysis may be feasible for the general problem if the transition kernels depend smoothly on the parameters. To make the analysis work, we would need results that can compare the performance of optimal policies of two close models (close as per the metrics suggested in [A], based on the closeness of the parameters).
While such perturbation-related results are available in general state-space problems for finite-horizon and discounted cost problems, they only exist in the finite state-space setting for average cost problems: [A] discusses the results for finite-horizon and discounted cost problems, and the results of [B] can be extended to the average cost problem in the finite state-space setting by the vanishing discount method (taking a limit as the discount factor converges to $1$). ___ [A] Müller, A. ``How does the value function of a Markov decision process depend on the transition probabilities?" Mathematics of Operations Research 22.4 (1997): 872-885. [B] Subramanian, J., Sinha, A., & Mahajan, A. ``Robustness and sample complexity of model-based MARL for general-sum Markov games." Dynamic Games and Applications 13.1 (2023): 56-88. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and have no further questions. I would positively consider raising my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback.
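The cover-and-project construction described in this rebuttal can be written out for a compact one-dimensional parameter space. The interval bounds and the value of epsilon below are illustrative assumptions.

```python
# Build a finite epsilon-net of Theta = [lo, hi] and map a posterior
# sample to its nearest center, as in the rebuttal's proposed
# analysis; compactness (total boundedness) guarantees a finite net.
def epsilon_net(lo, hi, eps):
    """Centers spaced 2*eps apart, so every point of [lo, hi] is
    within eps of some center."""
    centers, c = [], lo + eps
    while c - eps < hi:
        centers.append(min(c, hi))
        c += 2 * eps
    return centers

def project(sample, centers):
    """Replace a posterior sample by the closest covering center."""
    return min(centers, key=lambda c: abs(c - sample))

centers = epsilon_net(0.0, 1.0, 0.1)
theta_hat = project(0.33, centers)   # lands on the center near 0.3
```

With smooth dependence of the transition kernels on the parameter, planning only at the finitely many centers introduces at most an epsilon-controlled perturbation, which is the missing ingredient the rebuttal identifies.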
Summary: This paper presents an adaptation of TSDE to parametric MDPs with unbounded state space. The regret is sqrt{T}, which is good, but this holds under strong ergodicity assumptions, and lower-order terms can harm the behavior of the algorithm for small values of T. Strengths: - The paper is sound technically. (I quickly checked the proofs.) - The problem of learning unbounded MDPs is a natural and important question to address. Weaknesses: 1. The strong conditions (especially Assumption 4) are not necessary and sufficient conditions for the existence of an optimal policy, nor for the existence of a solution of the Bellman equation. 2. The queueing examples are not really convincing: the optimal service rates can be computed efficiently by just estimating the arrival rate and solving the optimal control problem for the expected sojourn time. Also, numerically, the growth rate of the regret gets worse as the arrival rate increases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The assumption on the uniqueness of the optimal policy is not related to any of the other assumptions and is not discussed. This is actually a strong assumption restricting the class of MDPs that can be learned. This deserves some discussion of why it is needed here. 2. The authors should discuss the practical aspects of their approach. In particular, can one check whether all the assumptions are satisfied while the MDP is unknown? 3. The Q-learning approach seems more natural than a model-based approach under unbounded state spaces, where only the visited states are used for learning. Several papers have already taken this option successfully and are not discussed here (see for example "Stable reinforcement learning with unbounded state space" by D. Shah, Q. Xie, Z. Xu). Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I cannot see any limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive review and reference suggestions (which we will cite). References marked with a number refer to the submission; references marked with a letter are listed at the end of this response. >Weakness 1. Necessity of assumptions. A. Assumptions 1 and 2, combined with the positive recurrence from Assumption 3, ensure the existence of an optimal policy and a solution to the average cost optimality equation (ACOE) and Poisson equation; see Section 2. Assumptions 3 and 4 are required for our analysis, as we need bounds on the moments of the maximum state norm reached by time $T$ as well as the hitting time to state $0^d$, which we obtain using the Lyapunov functions in Assumptions 3 and 4. For the average cost problem in countable spaces, establishing necessary and sufficient conditions for the existence of stationary optimal policies, or for solutions of the average cost optimality equation, are open problems. We used conditions from [9] (which are satisfied under our assumptions). The weakest conditions are in [A], and using them is left for future work, but we note the importance of the existence of stabilizing policies with finite average cost even in this paper; see [B] for a comparison of different conditions in the literature. Bounded costs can be problematic in the countable setting as they do not cover practical examples from queueing systems. Thus, we impose Assumption 1: our cost function is unbounded. >Weakness 2. Queueing examples. A. Whereas it may be possible to establish asymptotic optimality for a scheme employing rate estimates (using a modified MLE à la adaptive control), to the best of our knowledge, there are no finite-time regret guarantees provided for such schemes.
Existing related results, [18] (finite number of policies) and [27] (countable or uncountable number of policies), prove asymptotic optimality but assume further structure on the transition kernels, which our examples do not satisfy. Let $\rho$ be the normalized load. Then, the gap from the capacity boundary is a linear function of $1-\rho$. Therefore, the regret growing as the normalized load goes to 1 (i.e., the gap going to 0) is expected, since the system gets closer to the stability boundary as the arrival rate increases. Based on Remark 3 in the global response, our regret bound depends on $J^*$ and, thus, on the gap from the capacity boundary, and will increase as $\rho$ goes to 1, which reinforces our earlier comment. >Question 1. Uniqueness of the optimal policy. A. Uniqueness of the optimal policy is not essential for the validity of our results, provided that all optimal policies satisfy our assumptions. When this condition is not met, we need to select an optimal policy that is geometrically ergodic for all parameters. This could entail searching over all optimal policies when non-uniqueness holds. This issue can be avoided by using a smaller subset of policies, such as Max-Weight policies, for which ergodicity can be established for all possible models. We will clarify this in the final version. >Question 2. Practicality of our approach. A. For a given parameter set $\Theta$, the assumptions can be verified for all MDPs corresponding to any $\theta\in\Theta$, as in Appendix E for the queueing models of Figure 2. While there is no algorithmic way to find Lyapunov functions, queueing systems have accumulated a repository of Lyapunov functions over the years, making queueing systems a reasonable application domain to consider. >Question 3. Q-learning based on Shah et al. [E]. A. The differences with our work are as follows: 1.
[E] ignores optimality and focuses on finding a stable policy, which contrasts with our work that evaluates performance relative to the optimal policy. 2. [E] considers a discounted reward problem, essentially a finite-time horizon problem (given the geometrically distributed lifetime). Average cost problems (as studied by us) are infinite-time horizon problems, so connections to discounted problems can only be made in the limit of the discount parameter going to 1. 3. Moreover, [E] considers a bounded reward function, which simplifies their analysis but is not a practical assumption for many queueing examples. Further, for bounded reward settings with discounting, the assumption of a stable optimal policy with a Lyapunov function (as in [E]) is extremely restrictive: e.g., if the rewards increase to a bounded value as the state goes to infinity, then the stationary discount-cost optimal policy will likely be unstable, as the goal will be to increase the state as much as possible. Additionally, bounded costs for average cost problems need strong state-independent recurrence conditions for the existence of (stationary) optimal solutions, which many queueing examples do not satisfy: see [C] for necessary conditions, and [D] shows that a stationary average cost optimal policy may not exist. Finally, we are not aware of any other RL algorithms with provable low regret for the average cost problem in the countably infinite setting with an unknown model. ___ [A] Sennott, L. I. Average cost optimal stationary policies in infinite state Markov decision processes with unbounded costs. Operations Research 37.4 (1989): 626-633. [B] Cavazos-Cadena, R., and Sennott, L. I. Comparing recent assumptions for the existence of average optimal stationary policies. Operations Research Letters 11.1 (1992): 33-37. [C] Cavazos-Cadena, R. Necessary conditions for the optimality equation in average-reward Markov decision processes. Applied Mathematics and Optimization 19.1 (1989): 97-112.
[D] Fisher, L., Ross, S. M. An example in denumerable decision processes. The Annals of Mathematical Statistics 39.2 (1968): 674-675. [E] Shah, D., Xie, Q., Xu, Z. Stable reinforcement learning with unbounded state space. arXiv preprint arXiv:2006.04353 (2020).
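The exponential Lyapunov functions this rebuttal invokes for geometric ergodicity can be illustrated on a single-server toy queue. The arrival probability and the choice V(x) = z**x below are illustrative assumptions, not the rebuttal's actual constructions.

```python
import math

# Check the geometric (Foster-Lyapunov) drift condition
#   E[V(X_{t+1}) | X_t = x] <= lam * V(x)   for all x >= 1, lam < 1,
# for a discrete-time single queue (arrival w.p. p, departure w.p.
# q = 1 - p) with the exponential Lyapunov function V(x) = z**x.
p = 0.3
q = 1 - p
z = math.sqrt(q / p)       # this choice of z minimizes the drift factor
lam = p * z + q / z        # equals 2 * sqrt(p * q), which is < 1 when p < q

def expected_V_next(x):
    """E[z**X' | X = x] for x >= 1 (one arrival or one departure)."""
    return p * z ** (x + 1) + q * z ** (x - 1)

# The drift inequality holds with this lam for every x >= 1
# (relative tolerance absorbs floating-point rounding):
drift_ok = all(expected_V_next(x) <= lam * z ** x * (1 + 1e-9)
               for x in range(1, 200))
```

The same algebra shows why p < q (a stable queue) is essential: for p >= q no z > 1 gives lam < 1, so no exponential Lyapunov function certifies geometric drift.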
Summary: The authors consider the average reward Markov decision process framework with countable state spaces. The considered objective is to perform closed-loop optimal control for a family of MDPs parameterized in a compact space; this is a particularly interesting setting as the cost function is not assumed to be bounded. The authors propose a Thompson Sampling based algorithm and analyze its regret under suitable assumptions. They are able to show a finite time $\sqrt{|\mathcal{A}| T}$ Bayesian regret bound. The authors also show the practical significance of their algorithm by an empirical application to queuing models. Strengths: - This paper is very relevant to the community as it opens the way towards studying RL in continuous spaces without the assumption of bounded reward functions, thereby tightening the connection between reinforcement learning theory and optimal control. - Although certain assumptions are somewhat stringent, it is nice that the paper is able to convert an infinite horizon setting to a somewhat episodic setting. The latter usually allows for simpler analyses. - The application to a queuing model is also appreciated, a nice change from the standard RL benchmarks. - I would like to clarify that I did not go through the theoretical proofs and can therefore not comment on their correctness, apart from the general intuition that such results should be possible given appropriate assumptions on stability and dynamics. Weaknesses: - The assumptions seem somewhat stringent. For example, the finite support of transitions can be challenged even in the specific example of queuing models when large amounts of arrivals are possible. More importantly, Assumption 3 seems quite limiting; can the authors elaborate on what this implies for stability? I am not very knowledgeable in optimal control, but it seems that you are assuming stability of all policies? - The paper is sometimes very dense and not straightforward to follow.
For example, lines 173 to 192 require prior knowledge of several papers and definitions, e.g., the Poisson equation's link to the problem at hand / the forcing function, and other similar passages in the text. I would advise the authors to add the relevant definitions and lemmas, at least in the appendix, to make the paper self-contained. - The paper fails to cite many RL works in the continuous spaces setting, namely the entire line of function approximation in RL, for example: 1) "Frequentist Regret Bounds for Randomized Least-Squares Value Iteration" by Zanette et al., which provides an algorithm based on Thompson Sampling as well, and many other works (see references therein) in the model-free paradigm; 2) "Bilinear Exponential Family of MDPs: Frequentist Regret Bound with Tractable Exploration and Planning", which also provides similar algorithms for MDPs that seem to include the queuing models presented here; see references therein for model-based approaches to RL in continuous spaces. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In line 203, where is $\pi_{\theta_k}^*$ defined? How is this implemented in the examples? - From your understanding of your assumptions, is stability ensured by default? Or does Thompson sampling just ensure it by some magic in the proof? If the latter, and since you have a bound on the maximum $\ell_\infty$ norm of the states, is it not more straightforward to use existing RL algorithms? I guess what I'm asking is how necessary the Lyapunov-type analysis is in this case? - Why aren't there comparisons with other RL algorithms in the experiments? Especially since you implement PPO in the second model for your experiment, isn't it possible to compare directly against it and other deep RL methods? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The proposed work and algorithms are theoretical and have no direct societal impacts. The possible theoretical limitations are detailed to some extent in the paper and are to be addressed in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive review and references (which we will cite). We will add the related definitions in the appendix of the final version. References marked with a number refer to the submission, whereas references marked with a letter are listed at the end of this response. >Weakness 1. Necessity of assumptions. A: The state-dependent support of the transition kernel is finite for every state-action pair, but we do not assume that the supremum of the number of possible transitions is finite over the state space. Many queueing models fit this setting; see our examples in Section 5 and reference [52] (arrivals have an explicit bound). Generalizing to allow for an arbitrarily large number of arrivals at any instance will necessitate adjustments to our proof, but with reasonable assumptions (such as a finite moment generating function), we expect the results to continue to hold. As noted in Appendix A.1, Assumption 3 imposes a stability criterion (geometric ergodicity) that applies uniformly across policies, models, and states: for any $\theta_1,\theta_2 \in \Theta$, the sequence $\{(P_{\theta_1}^{\pi^*_{\theta_2}})^n\}$ converges geometrically fast to the stationary distribution $\mu_{\theta_1,\theta_2}$, where $P_{\theta_1}^{\pi^*_{\theta_2}}$ is the transition kernel of the Markov process obtained from the MDP $\left(\mathcal{X},\mathcal{A},c,P_{\theta_1}\right)$ by following policy $\pi^*_{\theta_2}$. This is stronger than positive recurrence: beyond the existence of a stationary distribution, it enforces a geometric convergence rate to the stationary distribution. In our context, we assume this convergence for all optimal policies $\pi^*_{\theta}$ corresponding to some $\theta \in \Theta$. We need this assumption to upper bound the first $r+1$ moments of the maximum state norms and the hitting times of state $0^d$.
At present, we don't know if we can weaken the requirement from geometric ergodicity to polynomial ergodicity (see Assumption 4). Please see Remark 1 in the global response for more discussion on the importance of stability. > Weakness 3. Citation of RL works Zanette et al. [A] & Ouhamma et al. [B]. A: We differentiate our work from [A,B] as follows. Both works [A,B] consider a finite-horizon problem. In contrast, our work considers an average cost problem, which is an infinite-horizon setting, and provides finite-time performance guarantees; asymptotic optimality of our algorithm is then immediate. In addition, [A] studies an MDP with a bounded reward function. Our focus, however, is learning in MDPs with unbounded rewards with the goal of covering practical examples from queueing systems. The reviewer is correct that the parameterization of transition kernels used in [B] and the prior work [C] can be used within the framework of our problem. However, similar to our work, additional assumptions, importantly the stability conditions proposed in our problem, are necessary to guarantee asymptotic learning and sub-linear regret. As there aren't general necessary and sufficient conditions on the parameters to ensure stability, posterior updates can be complicated with this parameterization. Another issue with exponential families of transition kernels is that they do not allow for $0$ entries (except through parameters increasing without bound), and so will not be directly applicable to queueing models (like our examples). >Question 1. Definition of $\pi^*_{\theta_k}$. A: In Assumption 3, we have defined $\pi^*_{\theta_k}$ as the unique optimal policy that minimizes the infinite-horizon average cost for the MDP $\left(\mathcal{X},\mathcal{A},c,P_{\theta_k}\right)$ within the policy class $\Pi$; Thm 1 considers all policies, and Cor 1 uses a subset of all policies. Here, $\theta_k$ is the sample generated from the posterior distribution at the beginning of episode $k$.
In the queueing model of Figure 2a (illustrating Thm 1), $\Pi$ is the set of all policies, and the optimal policy in $\Pi$ corresponding to any parameter $\theta$ is explicitly characterized in [29]: it is a threshold policy with an explicitly determined (finite) threshold. In the queueing model of Figure 2b (illustrating Cor 1), we find the average cost-minimizing policy within a subset of weighted Max-Weight policies, which route arrivals based on weighted queue lengths. The optimal policy even in this set is not known except when $\theta_1=\theta_2$, where the optimal value is ${\omega}=1$; so, to learn it, we use Proximal Policy Optimization (PPO) for countable state-space MDPs [13]; see Figure 5b in the Appendix. Hence, in both cases, the optimal-policy oracle can be implemented. >Question 2. Necessity of stability. A: Please see Remark 1 in the global response. >Question 3. Comparisons with other RL algorithms. A: The PPO algorithm of [13] \textbf{requires} model knowledge and finds the optimal average-cost policy using PPO for an MDP with known transition kernels. In our second queueing model in Figure 2b, we have used the PPO algorithm to find the best in-class policy (illustrating Cor 1), utilized in Line 7 of Algorithm 1 after realizing parameter $\theta^*_k$ from the posterior distribution. We are not aware of any RL algorithms with provable low regret in the setting of countably infinite spaces with an unknown model. ___ [A] Zanette, A., et al. Frequentist regret bounds for randomized least-squares value iteration. International Conference on Artificial Intelligence and Statistics. PMLR, 2020. [B] Ouhamma, R., Debabrota B., and Odalric M. Bilinear Exponential Family of MDPs: Frequentist Regret Bound with Tractable Exploration & Planning. Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 8. 2023. [C] Chowdhury, S. R., Gopalan, A., & Maillard, O. A. Reinforcement learning in parametric MDPs with exponential families.
In International Conference on Artificial Intelligence and Statistics, pages 1855–1863. PMLR, 2021. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their clear answer. I understand the necessity of the stability assumptions, but I still do not see how, given these assumptions, a standard RL algorithm with a high probability analysis wouldn't work just as fine. You seem to have an explicit bound on the maximum norm after $T$ time steps and this can be directly injected into the traditional analyses, am I wrong? Also, I still think authors should make a significant addition to the presentation to address Weakness 2 that I raised. Finally, for the empirical evaluation, what I meant is not to compare necessarily to some theoretically studied algorithm, but rather just the standard deep-RL algorithms, otherwise there is no baseline to also judge the empirical relevance. I believe that with minor improvements, the paper could be a good addition to the conference. Therefore, I would like to keep my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback. 1. The skip-free to the right property in Assumption 2 yields a polynomially-sized subset of the underlying state-space depending on the time-horizon $T$, specifically $S(T)=(hT)^d$. This polynomially-sized subset can be viewed as the effective finite-size of the system in the worst-case, and then, directly applying finite-state problem bounds (e.g., by using [38]) would result in a regret of order $\tilde O(S(T) \sqrt{ |\mathcal{A}| T})$, which is essentially $\tilde O(T^{d+0.5})$; since $d\geq 1$, such a coarse bound is not helpful even for asserting asymptotic optimality. Thus, to achieve a regret of $\tilde O(\sqrt{T})$, it is essential to carefully understand and characterize the distribution of $M^T_{\boldsymbol \theta^*}$ and then its moments; see Remark 1 in Appendix B. 
Furthermore, for the truncation plus standard RL algorithm, we need to add the error/regret due to the truncation of the state space to the regret term $\tilde O(S(T) \sqrt{ |\mathcal{A}| T})$. As the stationary distribution of the optimal policy (in most examples) does ascribe non-zero probability to states outside the truncated finite space $S(T)$, the error here will depend on ergodicity properties—most likely decreasing to zero fast in $T$ with geometric ergodicity and much slower with polynomial ergodicity. Further, such a scheme works with a fixed $T$, so either the horizon needs to be fixed ahead of time or the doubling-trick needs to be used. In essence, more care is likely needed to use a truncation scheme. 2. We will address weakness 2 in our final version and add related definitions to the appendix. 3. The comparison with a standard deep RL scheme is a future direction, but it would be challenging to determine uniformly suitable hyperparameters throughout the parameter space.
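The arithmetic behind the coarse bound in point 1 above is immediate; with $S(T) = (hT)^d$:

```latex
S(T)\,\sqrt{|\mathcal{A}|\,T} \;=\; (hT)^d\,\sqrt{|\mathcal{A}|\,T}
\;=\; h^d \sqrt{|\mathcal{A}|}\;\, T^{\,d+\frac{1}{2}},
```

which is super-linear in $T$ for every $d \ge 1$, so even asymptotic optimality (regret of order $o(T)$) cannot be deduced from it.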
Rebuttal 1: Rebuttal: Below we address common questions and remarks. >Remark 1. The necessity of stability assumptions. Stability needs to be imposed separately, so we use Assumption 3. This is due to the following reasons. In contrast to finite-state MDPs, where stability and existence of a stationary distribution are easily assured (via irreducibility or the existence of a single recurrent class, and aperiodicity), the countable state-space setting needs additional conditions to ensure that the Markov process resulting from using any stationary policy $\pi \in \Pi$ is positive recurrent or ergodic. Furthermore, recovering after either using an unstable policy or starting in a transient state can be problematic, and with a countable set of transient states $S^{\mathrm{tr}}$, the expected time to exit $S^{\mathrm{tr}}$ can be infinite. See [A,B] for more discussion on the importance of stability. To analyze the regret, we use the independence structure resulting from recurrent visits to state $0^d$ (with inter-visit durations being well-behaved), which may not occur without stability assumptions. Furthermore, to characterize the regret, we also require bounds on the moments of the (random) maximum state norm $l_\infty$ reached by time $T$ and on the hitting time to state $0^d$, for which we used the Lyapunov functions of Assumptions 3 & 4. Reference [13] also uses Lyapunov function-based arguments for finding the average cost optimal policy in countable-state MDPs with \textbf{known} transition kernels. The authors impose geometric ergodicity and utilize Lyapunov function-based arguments to analyze their proposed PPO policy's performance. To further clarify the necessity of stability assumptions, consider the queueing model in Figure 2a (illustrating Thm 1), which has a countable set of transient states $S^{\mathrm{tr}}$: all states where the second server is occupied above the threshold.
Our algorithm avoids this set (or other transient states) by always remaining within the (policy-dependent) reachable set of states from $0^d$ (which are positive recurrent by the stability assumptions). Finally, the stability assumption gives probabilistic control on the random $l_\infty$ and number of episodes, both of which are crucial to the result. Just using the skip-free to the right property yields a state-space (plus memory use, and regret) bound, via $l_\infty$, that grows polynomially in the time-horizon $T$ with degree $d$, so directly using RL algorithms and state-of-the-art results for such algorithms will yield too high a regret bound (one that scales as $T^{d+0.5}$, which is super-linear in $T$). >Remark 2. Requirement of an optimal policy oracle. To implement our algorithm, when we determine regret with respect to an optimal policy, we necessarily need to find the optimal policy for each model sampled by the algorithm (the optimal policy for Thm 1, and the optimal policy within the policy class for Cor 1); this has also been used in past work [17,18,27]. In the finite state-space setting, [38] provide a schedule of $\epsilon$ values to select $\epsilon$-optimal policies such that $\tilde{O}(\sqrt{T})$ regret results. The issue with extending the analysis of [38] to the countable state-space setting is that we need to formulate (and verify) ergodicity assumptions for a potentially large set of close-to-optimal algorithms whose structure is undetermined. Another issue is that, to the best of our knowledge, there isn't a general structural characterization of all $\epsilon$-optimal stationary policies for countable state-space MDPs, or even a characterization of the policy within this set that is selected by any computational procedure in the literature; current results only discuss existence and characterization of the stationary optimal policy. In the absence of such results, stability assumptions with the same uniformity across models as in our submission will be needed.
At present we don't know how to verify such an assumption for any example; moreover, the conditions required are likely to be too strong to be useful. If we could verify the stability requirements of Assumptions 3 & 4 for a subset of policies, the optimal oracle would not be needed; instead, by choosing approximately optimal policies within this subset, we can follow the same proof steps as [38] to guarantee regret performance similar to Corollary 1 (without knowledge of model parameters). For instance, in the queueing model of Figure 2b, if instead of the optimal weight we have access to a (performance-wise) $\epsilon$-optimal weight policy, we can easily see that the approximate policy satisfies the stability assumptions, and the sub-linear regret bounds carry through following arguments similar to those in [38], Section 3.2. We will add this extension to the final version. >Remark 3. Dependence of expected regret on problem parameters. We would like to clarify that our expected regret depends on the skip-free parameter $h$ defined in Assumption 2, the dimension of the state space $d$, the cost-function parameters $K$ and $r$ defined in Assumption 1, the supremum of the optimal cost $J^*$, and the parameter $r^p_*$ defined in Assumption 4, as $\tilde O(Kr\ d\ J^*\ h^{d+2r+r_*^p}\ \sqrt{|\mathcal A| T })$, where $\tilde O$ hides logarithmic factors in problem parameters, one of which is $\log^{d+r+r_*^p+2}(T)$. For simplicity, we have not included the Lyapunov-function-related parameters in the regret, but we will add the order accounting for the parameters associated with the problem structure to our final version. ___ [A] Sennott, Linn I. Average cost optimal stationary policies in infinite state Markov decision processes with unbounded costs. Operations Research 37.4 (1989): 626-633. [B] Cavazos-Cadena, Rolando, and Linn I. Sennott. Comparing recent assumptions for the existence of average optimal stationary policies.
Operations Research Letters 11.1 (1992): 33-37.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Self-Supervised Visual Acoustic Matching
Accept (poster)
Summary: The authors proposed a self-supervised approach to match acoustic conditions via visual information without the need for paired audio-visual data. With this approach, in-the-wild web data or simulated data can be utilized. They show that this approach can outperform the state-of-the-art on multiple datasets. Strengths: - The proposed approach is novel in several aspects: 1. it does not require paired audio-visual data, 2. it is able to leverage more data from the web or from simulation. - The authors provide a human perception study, which is important for their claims about perceptual accuracy. - This technique has potential applications in various audio domains, such as domain adaptation, helping improve in-the-wild acoustic event classification, etc. Weaknesses: - The writing of this paper is not easy to follow and is missing content; for example, line 200 refers to Section 4 for details on "RT60 to allow generalization", which does not exist in Section 4, which mostly focuses on datasets. In line 226, the off-the-shelf dereverberator is referred to Section 5, which is difficult to locate for the readers. - In Section 5, for all the implementation details, we suggest the authors organize and highlight with a table what data is used to train in which step. Currently it is not easy to follow the text and gain a clear understanding of the experimental designs. - The paragraph at line 188 seems to appear in the wrong place: the referred Figure 4 is in the results section, while the narrative appears in the approach section. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The use of RT60 needs more explanation: is the intuition to model just the acoustic environment, disentangled from the content involved? - Why does Table 1 on the far right (train with AVSpeech-Rooms and test with LibriSpeech) not include comparisons of STFT? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - No potential social or ethical implications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for the valuable feedback.** **Weaknesses:** **1) The writing of this paper is not easy to follow and is missing content; for example, line 200 refers to Section 4 for details on "RT60 to allow generalization", which does not exist in Section 4, which mostly focuses on datasets. In line 226, the off-the-shelf dereverberator is referred to Section 5, which is difficult to locate for the readers.** A: Thank you for pointing out the error in line 200; this has been corrected. Line 226 is on page 6; Section 5 (“Experiments”) can be found on the next page. The second paragraph describes the off-the-shelf dereverberator. Clicking on the section reference also directs here. We respectfully point out that the other 4 reviewers rate our Presentation clarity as Good or Excellent. **2) It is not easy to follow the text and gain a clear understanding of the experimental designs. In Section 5, for all the implementation details, we suggest the authors organize and highlight with a table what data is used to train in which step.** A: We thank the reviewer for their suggestion. We have added Table 7 in the rebuttal pdf, which displays the training steps for each dataset in concise form. **3) The paragraph at line 188 seems to appear in the wrong place: the referred Figure 4 is in the results section, while the narrative appears in the approach section.** A: We placed this paragraph with references to Figure 4 immediately after introducing the concept of de-biasing in order to help the reader visually and intuitively grasp what the de-biaser is doing to the audio, before they progress with the rest of the method. **Questions:** **1) The use of RT60 needs more explanation: is the intuition to model just the acoustic environment, disentangled from the content involved?** A: We describe our reasoning for using RT60 in the paragraph at line 194.
RT60 is a content-invariant measure of room reverberation, and is a function of the room geometry and the surface material absorption and reflection properties that characterize a room. These features make RT60 the optimal metric for our task, where we are evaluating whether two waveforms with potentially mismatched content sound as if they were recorded in the same room. The metric is used frequently in the literature, particularly for evaluating acoustic matching quality in prior work [35,23,19]. **2) Why does Table 1 on the far right (train with AVSpeech-Rooms and test with LibriSpeech) not include comparisons of STFT?** A: STFT is not applicable for that experiment, as explained in Lines 291-293. The LibriSpeech experiment (Table 1, last two columns) performs visual acoustic matching using anechoic source speech samples from the LibriSpeech dataset and real-world target images from AVSpeech-Rooms. Thus the predicted and ground truth reverberant output in this experiment have different speech content, so the STFT metric is not applicable. Applied here, STFT would capture differences in audio *content* (which by definition must be different here) instead of measuring error in acoustic properties alone. Hence, for this setting we report errors in the reverberant properties of the audio (RT60 error statistics), as well as perceptual accuracy via the human user study. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for your explanations and clarifications. Please include Table 7 from the rebuttal in the final paper if possible. I am going to change the rating.
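For readers unfamiliar with the metric, RT60 can be estimated from a measured or simulated impulse response via Schroeder backward integration. The sketch below is our own illustration, not the estimator used in the paper (the function name `rt60_schroeder` and the -5 to -25 dB fitting window are our choices): it computes the energy decay curve, fits a line over the window, and extrapolates the decay to -60 dB.

```python
import math

def rt60_schroeder(ir, fs, db_start=-5.0, db_end=-25.0):
    """Estimate RT60 from an impulse response: Schroeder backward
    integration gives the energy decay curve (EDC); a least-squares line
    is fit between db_start and db_end, then extrapolated to -60 dB."""
    energy = [x * x for x in ir]
    total = sum(energy)
    # Backward-integrated energy, normalized so the curve starts at 0 dB.
    edc, acc = [], 0.0
    for e in reversed(energy):
        acc += e
        edc.append(acc)
    edc.reverse()
    edc_db = [10.0 * math.log10(e / total) for e in edc]
    # (time, level) pairs inside the fitting window.
    pts = [(i / fs, L) for i, L in enumerate(edc_db) if db_end <= L <= db_start]
    # Least-squares slope of level vs. time, in dB per second.
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    ml = sum(L for _, L in pts) / n
    slope = (sum((t - mt) * (L - ml) for t, L in pts)
             / sum((t - mt) ** 2 for t, _ in pts))
    return -60.0 / slope

# Sanity check on a synthetic exponential decay with a known RT60 of 0.5 s:
# amplitude exp(-k t) with k chosen so energy drops 60 dB over 0.5 s.
fs = 8000
true_rt60 = 0.5
k = math.log(1000.0) / true_rt60
ir = [math.exp(-k * (i / fs)) for i in range(fs)]  # 1 s of decay
est = rt60_schroeder(ir, fs)
```

On this synthetic decay the estimate recovers the designed 0.5 s closely; real impulse responses additionally need noise-floor handling (e.g., Lundeby truncation), omitted here.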
Summary: The paper introduces a self-supervised method for visual acoustic matching (VAM), where the training samples consist of only the target scene image and audio. The paper is well-written and easy to follow. The experimental results have validated the effectiveness of the proposed approach. We have the following comments: (1) The proposed system is somewhat simple as it incorporates existing methods. Therefore, the scientific depth presented in the paper may not be up to the standards presented at the prestigious conference NeurIPS. (2) It would be beneficial to include an analysis of the dereverberator. The authors can evaluate the dereverberator's performance using SRMR with different levels of reverberation. It would also be interesting to see the results with and without the De-biaser component based on SRMR. (3) Please provide results for environments with mild, moderate, and severe reverberant conditions. This will help demonstrate the effectiveness of the LeMARA approach more clearly. (4) It is crucial to showcase the impact of the De-biaser component. In addition to the spectrograms presented in Figure 4, I recommend that the authors perform additional experiments using supplementary metrics like SRMR or a readily available ASR model. This will provide further evidence and insight into the effectiveness of the De-biaser approach. (5) It would be informative to report the results of the LeMARA approach separately for seen and unseen images. (6) Figure 4 shows that LeMARA exhibits greater variation in RT60 compared to AViTAR. Please provide an explanation for this observation. (7) We suggest preparing an anonymous website where several sets of sound samples of the source, target, AViTAR, and LeMARA can be presented. (8) Figure 5 indicates that AViTAR outperforms LeMARA in the left-down example. Please provide an explanation for this result. 
Strengths: The paper introduces a self-supervised method for visual acoustic matching (VAM), where the training samples consist of only the target scene image and audio. The paper is well-written and easy to follow. The relevance of the research task holds significant value in AR/VR systems and can serve as a crucial element. The experimental results have validated the effectiveness of the proposed approach. Weaknesses: Some analyses, which can further demonstrate the effectiveness of the proposed approach, are missing, for example: (1) The authors can evaluate the dereverberator's performance using SRMR with different levels of reverberation. It would also be interesting to see the results with and without the De-biaser component based on SRMR. (2) Please provide results for environments with mild, moderate, and severe reverberant conditions. This will help demonstrate the effectiveness of the LeMARA approach more clearly. (3) It is crucial to showcase the impact of the De-biaser component. In addition to the spectrograms presented in Figure 4, I recommend that the authors perform additional experiments using supplementary metrics like SRMR or a readily available ASR model. This will provide further evidence and insight into the effectiveness of the De-biaser approach. (4) It would be informative to report the results of the LeMARA approach separately for seen and unseen images. (5) In Line 30, there is one reference missing. (6) In Fig. 2(b), D(G(A_T)) should be D(G(A_t)). Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) A critical point is to showcase the effectiveness of the De-biaser component. Please provide additional experiments to demonstrate its performance. (2) It is interesting to see the achievable performance of the LeMARA approach under mild, moderate, and severe reverberant conditions. (3) It is crucial to report the results of the LeMARA approach separately for seen and unseen images.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: (1) The proposed system is somewhat simple as it incorporates existing methods. Therefore, the scientific depth presented in the paper may not be up to the standards presented at the prestigious conference NeurIPS. (2) Figure 4 shows that LeMARA exhibits greater variation in RT60 compared to AViTAR. Please provide an explanation for this observation. (3) Figure 5 indicates that AViTAR outperforms LeMARA in the left-down example. Please provide an explanation for this result. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for the valuable feedback and positive remarks.** **Weaknesses:** **1) The authors can evaluate the dereverberator's performance using SRMR with different levels of reverberation. It would also be interesting to see the results with and without the De-biaser component based on SRMR.** A: Thanks for the suggestion. This is easy to add to our analysis. Figure 7 and Table 4 in the rebuttal pdf show the SRMR score of dereverberated and de-biased data at various levels of reverberation. Overall, the de-biaser produces audio that is significantly cleaner and more anechoic than dereverberated data. **2) Please provide results for environments with mild, moderate, and severe reverberant conditions.** A: Figure 8 in the rebuttal pdf shows Relative RT60 Error across "bins" of increasing reverberation. Relative RTE generally increases with reverberation across both datasets, likely because highly reverberant audio contains strong residual acoustic cues that may be difficult to completely remove, whereas this is easier in audio that has less reverberation to begin with. On AVSpeech-Rooms and LibriSpeech generalization, LeMARA achieves lower RTE than the baselines even in this high reverberation regime, due to the de-biaser's ability to strip away even strong residual acoustic cues in audio. **3) It is crucial to showcase the impact of the De-biaser component.** A: Our results focus on the correctness of the reverberator outputs (Table 1 in the main paper, Tables 5 and 6 in rebuttal pdf), given that the task is visual acoustic matching, i.e., re-synthesizing reverberation for a new environment. In particular, rows 3 and 5 in Table 1 of the main paper pinpoint the impact of the de-biaser. Furthermore, our ablations table in Supp. confirms the effectiveness of our novel de-biaser. However, we appreciate that the reviewer is interested in understanding how good the "under the hood" de-biaser module is itself.
To that end, Table 4 in the rebuttal pdf highlights the impact of de-biasing on speech quality and reverberation across a variety of metrics, including SRMR. Figure 13 shows the distribution of SRMR scores for reverberant, dereverberated, and de-biased audio. Especially on real-world data (AVSpeech-Rooms), we observe that de-biased data has significantly better quality than dereverberated data. **4) Informative to report the results of the LeMARA approach separately for seen and unseen images.** A: Thanks for raising this point. We report results on seen data in Table 6 in the rebuttal pdf. Our LeMARA method outperforms baselines on a variety of RT60 and STFT based metrics across both datasets. Please refer to Table 1 (main paper) and Table 5 (in rebuttal pdf) as well as Lines 305-327 for the results and original analysis on unseen data. We reported test-unseen in the main paper as this is the more robust evaluation of performance and due to space constraints. **5) In Line 30, there is one reference missing.** **6) In Fig. 2(b), D(G(A_T)) should be D(G(A_t)).** Thank you, fixed. **Questions:** **1) Critical to showcase the impact of the De-biaser component.** A: Please see our response for **Weaknesses Question 3**. **2) Interesting to see the achievable performance of LeMARA approach under mild, moderate, and severe reverberant conditions.** A: Please see our response for **Weaknesses Question 2**. **3) Crucial to report the results of the LeMARA approach separately for seen and unseen images.** A: Please see our response for **Weaknesses Question 4**. **Limitations:** **1) Proposed system is somewhat simple as it incorporates existing methods.** A: Existing approaches for VAM [3] use an off-the-shelf dereverberator model to pre-process speech before acoustic matching. In contrast, we propose a mutual-learning based approach that cyclically optimizes a de-biaser to strip away acoustics and a reverberator to add these acoustics back in.
We construct a time-domain GAN model and novel adversarial training objective (Acoustic Residue) which assigns a scalar value to the amount of residual acoustic information in an audio clip. We build a dual-WaveNet model to represent the Acoustic Residue function (see Figure 2c), consisting of a blind WaveNet model and a visual-conditioned WaveNet with RT60 estimation network heads, which are used to compute the Acoustic Residue loss signal. Finally, we also devise a novel training strategy for our GAN (Line 209) that uses the Acoustic Residue signal to not only optimize the generator component as in a traditional GAN, but to also update the Acoustic Residue networks themselves, mitigating the distribution shift that occurs in generated data over the course of GAN training. Please refer to Lines 158, 209, 130, and 120 for details on these novel elements. **2) Figure 4 shows that LeMARA exhibits greater variation in RT60 compared to AViTAR. Please provide an explanation.** A: This is an important attribute of our approach. Both AViTAR [3] and ViGAS [5] perform VAM on dereverberated audio that has residual acoustic reverberation. Thus they learn to add in less reverberation than would be necessary if training on true anechoic data. At test time, however, this leads to under-reverberation, as seen prominently in Figure 11 in the rebuttal pdf. In contrast, our LeMARA is trained on de-biased audio that has been adversarially optimized to strip away reverberation (Lines 56-59). Given this pseudo-anechoic audio and the natural variation of RT60 values in our training data, the reverberator in LeMARA correctly learns to add a wider variation of reverberation levels into the audio, conditioned on the image. **3) Figure 5 indicates that AViTAR outperforms LeMARA in the left-down example. 
Please provide an explanation.** A: We speculate that the irregular room shape along with camera lens distortion makes the room appear artificially small, leading the model to under-reverberate the audio (Lines 340-342). --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: The authors have well addressed our concerns. We decided to raise our score.
Summary: This paper addresses the task of "visual acoustic matching" (VAM): taking a source audio clip and target visual environment (i.e. an image), and modifying the source audio clip such that it sounds like the clip was recorded in the target environment. The paper proposes a self-supervised approach for training neural networks to solve this task, which can train on examples that only contain the target audio and target visual environment. The idea is to disentangle the room acoustics from the audio content, such that the source room acoustics can be removed, then an audio-visual model can transform this audio to match the target visual environment. The method uses a conditional GAN framework, where an "acoustic residue" metric is defined that measures the discrepancy between audio-only and audio-visual reverberation. The acoustic residue metric is used to train a discriminator that can be used to train a "de-biaser" generator model that strips additional reverberation from an initially dereverberated signal. The output of the de-biaser can then be fed to the audio-visual reverberator. Although the method can theoretically be applied to other types of signals, the paper focuses on clean speech as the audio signals. Datasets used are SoundSpaces-Speech (reverb impulse responses (RIRs) simulated from Matterport3D scans of homes with 3D-rendered human at source location and audio from LibriSpeech) and AVSpeech-Rooms (YouTube videos that mostly feature a single speaker with little background noise). Objective metrics are used to measure performance: MSE between magnitude spectrograms of predicted and ground-truth speech, and MSE between RT60 estimates of predicted and ground-truth speech. The proposed method achieves better objective scores compared to comparable methods. A perceptual study is also done, which shows users could identify the room from the proposed method 46.1% of the time, versus 34.7% for a baseline method. 
Strengths: **S1)** The method provides a means of training self-supervised models for VAM. **S2)** Evaluation includes a human perceptual study. **S3)** Demo video is clear and helps with understanding the method. Also, thanks for providing audio demos, they are very useful for evaluating the method and how it compares to the AViTAR baseline. Weaknesses: **W1)** The evaluation of the method is weak. Watching the demo video, it seems like this is certainly a difficult task to evaluate, since the effect can be rather subtle. Nevertheless, I think the evaluation metrics could be improved. First, there are better options than just measuring MSE on magnitude spectrograms. Human hearing is logarithmic, so MSE on linear magnitude is a poor match to human perception. An easy alternative is to measure MSE between log spectrograms, although this encounters an issue with where to set the floor on the log to avoid -infinity. A solution to this is to use magnitude raised to a power (e.g. 0.3), which approximates the log, but also goes to 0 where the magnitude is 0. This may help with this issue: "Models that use this dereverberator without further de-biasing will display artificially low STFT error when evaluated in-dataset" Second, there are other important properties of reverb impulse responses besides RT60, such as direct-to-reverberant ratio (DRR). DRR measures the ratio of energy of the direct path to the energy of the reverberant part, and can be an important cue for distance of the source. Also, distance of the source is not really discussed, and DRR is a crucial property to help measure this (see W3).
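The power-compression suggestion above is straightforward to implement; the sketch below is our illustration only (the helper names `stft_mag` and `compressed_mse`, the rectangular 64-sample frames, and the 0.3 exponent are assumptions for demonstration, not the paper's evaluation code).

```python
import cmath
import math

def stft_mag(x, win=64, hop=32):
    """Magnitude spectrogram via a plain per-frame DFT (no window
    function; for illustration, not a production STFT)."""
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        row = []
        for k in range(win // 2 + 1):
            z = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                    for n in range(win))
            row.append(abs(z))
        frames.append(row)
    return frames

def compressed_mse(sa, sb, power=0.3):
    """MSE between power-compressed magnitudes: |X|**0.3 behaves like a
    log scale perceptually but stays finite (0) where the magnitude is 0."""
    fa = [m ** power for fr in sa for m in fr]
    fb = [m ** power for fr in sb for m in fr]
    return sum((p - q) ** 2 for p, q in zip(fa, fb)) / len(fa)

# Two test tones: an identical clip vs. one with a quiet extra component,
# mimicking subtle residual reverberant energy.
n = 512
a = [math.sin(2 * math.pi * 0.05 * t) for t in range(n)]
b = [ai + 0.05 * math.sin(2 * math.pi * 0.20 * t)
     for t, ai in zip(range(n), a)]
spec_a = stft_mag(a)
err_same = compressed_mse(spec_a, spec_a)
err_diff = compressed_mse(spec_a, stft_mag(b))
```

Relative to linear-magnitude MSE, the compressed distance weights quiet spectral regions more heavily, which is where residual reverberation tends to hide.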
Third, the comparison of the RT60 distributions in Figure 4 is not very convincing, because they are just box plots, and it's not clear how the proposed approach "more closely matches the ground truth target distribution.": the median is closer to ground-truth than the baseline, but it seems like a different type of plot would allow more detailed comparison of the distributions, like histograms or violin plots. **W2)** The paper restricts its focus to a clean single speaker, which suggests the method may be limited on real-world data, which can contain a great variety of non-speech sounds, and also multiple speakers. I don't think the proposed method could be applied directly to these scenarios, because all sounds would need to first be separated, then each processed by the pipeline (i.e. each sound dereverberated and re-reverberated). **W3)** The paper does not discuss the effect of distance of an object, nor the effect of distance on the reverberation. In particular, the DRR is an important property of reverberation that is totally ignored in this paper. At the very least, assumptions about how the method handles distance and location of sources need to be clearly described and discussed, and also I think evaluation metrics should take distance into account (see W1). **Minor comments and typos** a) "data, we" -> "data, and we" b) "target space [?,..": update missing ref c) "In-the-wild Web" -> "In-the-wild web" d) "We focus on human speech in indoor settings": clearly specify that it's clean speech, without background noise e) "this leaves signals of the target environment": I think "residuals" would be better than "signals" f) "Unlike SRMR, DNSMOS [31], or any existing off-the-shelf metric that quantifies dereverberation,": I don't think DNSMOS is trained to quantify reverberation specifically, it's overall quality, which can be affected by other properties such as background noise and/or artifacts. 
May be good to adjust this description Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1) Looking at the spectrograms of de-biased audio in Figure 4 and listening in the demo video, de-biased audio seems like it is applying fairly aggressive suppression, perhaps removing some energy of the anechoic audio. This seems like a weakness of the method. Are there mechanisms to prevent over-suppression of the anechoic audio? Does this suggest a trade-off for the de-biaser between dereverberating and suppressing signal? Some discussion of this would be helpful. Q2) In Figure 2, where does "A" come from (lower right panel)? I guess this is the de-biased audio? Would be good to make this more clear Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations described adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
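A minimal sketch of the power-compressed spectrogram error suggested in W1 (MSE on $|X|^{0.3}$, which approximates the log's perceptual scaling while remaining finite at zero magnitude); the function name and default exponent are illustrative, not code from the paper:

```python
import numpy as np

def compressed_mag_mse(mag_pred, mag_gt, power=0.3):
    """MSE between power-compressed magnitude spectrograms.

    Raising magnitudes to a power < 1 approximates log scaling but,
    unlike a log, stays finite where the magnitude is exactly zero,
    avoiding the floor problem of log-spectrogram MSE.
    """
    a = np.abs(mag_pred) ** power
    b = np.abs(mag_gt) ** power
    return float(np.mean((a - b) ** 2))
```

With `power=1.0` this reduces to plain linear-magnitude MSE, so the compression strength can be varied to study its effect.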
Rebuttal 1: Rebuttal: **Thank you for the valuable feedback and questions. We hope that our clarifications about the dataset contents and incorporation of the additional suggested metrics help you reconsider our contribution.** **Weaknesses:** **W1)** **I) There are better options than just measuring MSE on magnitude spectrograms. An easy alternative is to measure MSE between log spectrograms** A: Thank you for this suggestion. We have added a log-magnitude STFT loss as an additional metric in Tables 5 and 6 in the rebuttal pdf. We again outperform the current SOTA approach (AViTAR [3]) on this metric across both datasets. **II) There are other important properties of room impulse responses besides RT60, such as direct-to-reverberant ratio (DRR). …does not discuss the effect of distance…** We discuss the speaker-position dependent nature of RIRs and how this is addressed in SoundSpaces-Speech data creation at Lines 247-255. DRR estimation from reverberant audio is challenging. Spectrograms mix both direct and reverberant sound components across frequency and time, which makes it difficult to estimate DRR even when pre-training the model on synthetic audio. Spectrogram-based RT60 estimation is more reliable, as it can be accurately estimated from energy reduction across temporal bins on the spectrogram. If we have misunderstood the suggestion, we’d welcome a pointer on how to use DRR as a reliable metric when one lacks ground-truth RIRs. To illustrate the effect of speaker distance on performance, we provide an STFT Error vs speaker distance plot, Figure 16 in the rebuttal pdf (for SoundSpaces, where we have GT RIRs). Overall, we observe minimal change in error as a function of speaker distance. To provide an analysis of DRR on our data, Figure 15 displays the relationship between speaker distance and DRR on samples from SoundSpaces-Speech. We observe that DRR generally decreases as a function of speaker distance. 
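When ground-truth RIRs are available (as in SoundSpaces-Speech), DRR can be computed directly from the impulse response rather than estimated blindly. A minimal sketch, assuming the direct path occupies a short window (here 5 ms) around the RIR peak; the function name and window choice are illustrative:

```python
import numpy as np

def drr_db(rir, sr, direct_ms=5.0):
    """Direct-to-reverberant ratio (dB) from a room impulse response.

    Energy in a short window around the strongest arrival is treated
    as the direct path; all remaining energy counts as reverberant.
    """
    rir = np.asarray(rir, dtype=float)
    peak = int(np.argmax(np.abs(rir)))
    half = max(1, int(sr * direct_ms / 1000.0) // 2)
    lo, hi = max(0, peak - half), peak + half
    e_direct = float(np.sum(rir[lo:hi] ** 2))
    e_reverb = float(np.sum(rir ** 2)) - e_direct
    return 10.0 * np.log10(e_direct / e_reverb)
```

A stronger late tail (e.g. a more distant source in the same room) lowers the returned value, matching the distance trend the rebuttal reports in Figure 15.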
**III) RT60 distributions in Figure 4 are not very convincing, because they are just box plots…a different type of plot would allow more detailed comparison of the distributions, like histograms or violin plots.** A: Thank you for the valuable suggestion for how to best present the output distributions. We now provide histogram and violin plots of the source, predicted, and target RT60 distributions (Figures 9-12 in rebuttal pdf). We believe this is indeed a more direct way to see that the proposed approach more closely matches the target distribution on all three setups, compared to our original boxplots. **W2)** **The paper restricts focus to a clean single speaker, which suggests the method may be limited on real-world data, which can contain a great variety of non-speech sounds, and also multiple speakers.** A: Many real-world videos consist of a single speaker (instructional videos, presentations, vlogs). The AVSpeech dataset from which our training set is sampled consists of over 290k such videos from YouTube. Although these videos are single speaker, we disagree with the characterization as having little background noise. The clips contain a variety of non-speech background sounds (e.g. white noise from air conditioning, clicking/tapping noises from object interactions, background music). Our model performs well in this real-world setting, as shown by the results on AVSpeech-Rooms in Tables 1 (main paper) and 4, 5, and 6 (rebuttal pdf), in which we consistently outperform the SOTA. Conceptually, our approach does not make any assumptions about the type of audio, whether speech or a single speaker (see Line 67, footnote 2). Given a dereverberator and RT60 estimator trained on the relevant types of sounds (mixtures of speakers and/or non-speech sounds), our approach can be used to train a VAM model directly on these mixtures. 
In principle, audio source separation for sources close by in the same environment would not be necessary; they would all be influenced by the same room acoustics. We leave such explorations for future work. **W3)** **The paper does not discuss the effect of distance of an object, nor the effect of distance on the reverberation...(see W1)** A: Please refer to our earlier response in **W1 part II**. **Minor comments and typos** A: Thank you, fixed. **Questions:** **Q1) Are there mechanisms to prevent over-suppression of the anechoic audio? Does this suggest a trade-off for the de-biaser between dereverberating and suppressing signal? Some discussion would be helpful.** A: Thank you for the insightful questions. The de-biaser may remove some energy from audio, but the spectrogram-based loss provides a strong signal for the reverberator to add this energy back during fine-tuning. This can be seen in the predicted reverberant audio spectrograms in the demo video. There does exist a trade-off between dereverberation and suppression: if the de-biaser's aggressive dereverberation also removes energy from audio, then the reverberator fine-tuned on this data may learn to add more energy back than is necessary, which at test time may hurt STFT error. Optimizing de-biased speech for SRMR helps mitigate this, preserving low-frequency modulation energy (speech content) over high-frequency modulation energy (reverberant content). Pre-training with SRMR provides the generator with a strong prior for preserving energy associated with speech content in de-biased audio, guarding against indiscriminate energy suppression during Acoustic Residue fine-tuning. Also, energy removal may contribute to a larger STFT-based error, but it does not affect reverberant properties, which our human user study shows may be more important in evaluating perceptual accuracy (Line 343). **Q2) In Figure 2, where does "A" come from (lower right panel)? I guess this is the de-biased audio? 
Would be good to make this more clear.** A: “A” is the input to the metric, which can be either the de-biased audio or the reverberant audio. We have updated the figure to more clearly show this. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks to the authors for the detailed responses and rebuttal PDF. > If we have misunderstood the suggestion, we’d welcome a pointer on how to allow DRR as a reliable metric when one lacks ground truth RIRs. I guess I was thinking of blind DRR estimation, which is certainly not perfect, but could provide some insight. A number of blind DRR methods have been proposed that could be used, even from a single microphone. There is this classic Matlab implementation: https://www.mathworks.com/matlabcentral/fileexchange/32752-blind-direct-to-reverberant-energy-ratio-drr-estimation And here is a more recent paper on blind DRR estimation: https://www.isca-speech.org/archive_v0/Interspeech_2020/pdfs/2171.pdf But in any case, I appreciate the additional results on DRR versus distance. > We now provide histogram and violin plots of the source, predicted, and target RT60 distributions (Figures 9-12 in rebuttal pdf) Thanks, that's definitely an improvement. > We have added in log magnitude STFT loss as an additional metric in Tables 5, and 6 in the rebuttal pdf. I would still encourage the authors to consider MSE on magnitude spectrograms raised to the 0.3 power, i.e. $|X|^{0.3}$. The authors have addressed most of my concerns, so I am willing to raise my score.
Summary: This work proposes a method for visual acoustic matching, the task of processing an audio signal so the room acoustics are perceived as originating from within a certain room based on an image. While paired data is generally required for this task, the proposed approach is self-supervised and trained using “in-the-wild” videos of speakers in various rooms. To achieve this, a dereverberation system along with a de-biaser, a visually-conditioned reverberator, and a blind reverberator are used. The de-biaser is trained by optimizing a novel acoustic residue metric that measures the relative difference between the amount of acoustic information in two recordings. While similar to dereverberation, this de-biasing process can be seen as a post-processing step that aims to further reduce information in the dereverberated output so that the visually-conditioned reverberator model must use cues from the supplied image to match the target room acoustics. To use this acoustic residue metric as an objective function, the de-biaser is trained in a MetricGAN fashion where the discriminator is trained to regress the true metric value. Two evaluation datasets are used and results indicate that the proposed method outperforms existing baseline systems. In addition, a human perception study also validates these results. Strengths: 1. The proposed method is original and focuses on combining multiple existing approaches along with a novel acoustic residue metric in order to achieve a system that can be trained without paired data. These approaches include the use of various pretrained submodules, including a dereverberation model and an RT60 estimation model, as well as the MetricGAN approach that enables the use of the acoustic residue metric during optimization. 2. As outlined by the authors, existing approaches are often greatly limited by the availability of paired data. 
Not only is the proposed approach capable of being trained without paired data, but the evaluation also indicates superior performance to existing methods. Weaknesses: 1. Some relevant references for work in audio-only acoustic matching are omitted. Works such as [1] are similar in nature but rely on conditioning from an audio signal instead of an image. Major components of the reverberator architecture are similar as well, such as the use of a WaveNet architecture for audio processing. This would be highly relevant in the context of the blind reverberator in this work. Also, other audio-only approaches such as [2] would be relevant to include in the context of related work. It would also be relevant to address other GAN-based reverberation models such as [3, 4]. 2. While overall the manuscript is well organized and clear, further refinement with regards to the explanation of the process for training the complete system may help reduce the burden for readers. For example, perhaps an algorithm listing each step of the process and referencing the variables used in the text would be helpful. 3. While the evaluation considers two datasets and appears relatively sound, it is still somewhat difficult to understand the relative improvement afforded by the proposed method in comparison to the baselines. 4. While the authors are commended for including a human perception study, the design of the study may be somewhat flawed, making it difficult to draw any conclusion from the results. [1] Su, Jiaqi, Zeyu Jin, and Adam Finkelstein. "Acoustic matching by embedding impulse responses." ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. [2] Steinmetz, Christian J., Vamsi Krishna Ithapu, and Paul Calamia. "Filtered noise shaping for time domain room impulse response estimation from reverberant speech." 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). IEEE, 2021. 
[3] Ratnarajah, Anton, Zhenyu Tang, and Dinesh Manocha. "IR-GAN: room impulse response generator for speech augmentation." INTERSPEECH (2021). [4] Ratnarajah, Anton, Zhenyu Tang, and Dinesh Manocha. "Ts-rir: Translated synthetic room impulse responses for speech augmentation." 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2021. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is it fair to call the proposed approach “self-supervised”? While the approach does not require paired data directly, it appears that paired data is indirectly required, since pretrained models, such as the RT60 estimator and dereverberation models, are used. In other words, it appears that it would not be possible to train the proposed system in a fully self-supervised manner from scratch, without these pretrained modules, which in fact use labeled/paired data. Instead, it may be more accurate to label the proposed method as one capable of using an “unpaired” dataset instead of self-supervised. Perhaps terms such as “semi-supervised” or “weakly-supervised” are more appropriate? What is the motivation for using the MetricGAN approach to construct a differentiable proxy of the acoustic residue metric $\mathcal{M}$? Based on the text, it appears that the RT60 estimation model $\mathcal{RT}$ is implemented with a neural network and hence differentiable. While (3) involves non-differentiable functions such as the absolute value and $\max$, these are both approximately differentiable, indicating that it may be possible to directly backprop through the metric $\mathcal{M}$. Is this something the authors considered? If so, was the failure of this approach the motivation to use the MetricGAN approach? Some discussion on the motivation for the MetricGAN approach would be beneficial. Line 30: A reference has not been rendered properly “?” Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: It could be beneficial to include not only the mean RT60 error (RTE (s)) as reported in Table 1, but also the bias and correlation coefficient, which is common practice in RT60 estimation evaluation. This can help to establish how the model is failing, especially if the test dataset has a non-uniform distribution of RT60 targets. One limitation of the use of the RT60 estimation in the proposed acoustic residue metric is that this measurement appears to consider only the broadband RT60. It is generally accepted that in most rooms the RT60 can vary quite significantly across frequency, for example, with the higher frequencies decaying at a significantly faster rate than the low frequencies (generally due to absorption from the material properties). As a result, this variation of RT60 across frequency often provides significant insight into the character of the room, which is often detectable by a human listener. Therefore, using only the broadband RT60 in the acoustic residue metric may lead to some cues about the bandwise RT60 remaining in the de-biased recording as long as the broadband RT60 is perturbed. Future work could consider a band-wise RT60 estimation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
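For background on the RT60 quantity discussed throughout: given a ground-truth RIR, broadband RT60 is classically estimated via Schroeder backward integration, fitting a line to part of the energy decay curve and extrapolating to a 60 dB decay. The sketch below is illustrative (the paper's estimator is instead a learned, spectrogram-based model); a band-wise variant, as the reviewer suggests, would apply the same fit to octave-band-filtered copies of the RIR:

```python
import numpy as np

def rt60_from_rir(rir, sr, fit_db=(-5.0, -25.0)):
    """Broadband RT60 via Schroeder backward integration.

    Fit a line to the energy decay curve between the fit_db levels
    and extrapolate its slope (dB per second) to a 60 dB decay.
    """
    rir = np.asarray(rir, dtype=float)
    edc = np.cumsum(rir[::-1] ** 2)[::-1]       # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(rir)) / sr
    mask = (edc_db <= fit_db[0]) & (edc_db >= fit_db[1])
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope
```

The fit range (here -5 to -25 dB) avoids the direct sound at the top of the curve and the noise floor at the bottom.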
Rebuttal 1: Rebuttal: **Thank you for the valuable feedback, insights, and positive remarks.** **Weaknesses:** 1) **Some relevant references for work in audio-only acoustic matching are omitted.** A: Thank you for bringing these audio-only works to our attention. We are happy to cite them. Our discussion of audio-only acoustic matching is in Lines 30-33, and on Line 79 including NAF ([21]). Our discussion on GAN-based enhancement models is in Lines 94-102 including Sergan ([1]) and MetricGAN models ([11,12,13]). (Su et al., ICASSP 2020) is relevant audio-only acoustic matching work, which shares our general conditional WaveNet architecture, as do [38,32] and the Acoustic Synthesis module of [5] (i.e., it is not unique to (Su et al., ICASSP 2020)). (Su et al., ICASSP 2020) uses training data in which the same utterances are recorded in different environments, motivating an approach that is quite different from our setting, where every (utterance, environment) pair is unique. This key difference motivates the rest of our architectural innovation (adversarial de-biaser, Acoustic Residue metric, dual-WaveNet reverberator, and mutual learning framework). (Steinmetz et al., WASPAA 2021) addresses the related task of RIR generation along the lines of [35,23]. (Ratnarajah et al., INTERSPEECH 2021) and (Ratnarajah et al., ASRU 2021) develop approaches for this same task using conventional image-based GANs. [35,23,11,12,13] collectively address RIR generation and audio-based GAN methods which these two works can be grouped with. 2) **While overall the manuscript is well organized and clear, provide refinement ... perhaps an algorithm listing each step.** A: Thank you for this suggestion. We have added an algorithm box in the rebuttal pdf, and a tabular description of our three-stage training process (Figure 14, Table 7). 
3) **While the evaluation considers two datasets and appears relatively sound, it is still somewhat difficult to understand the relative improvement afforded by the proposed method in comparison to the baselines.** A: Table 1 highlights the quantitative improvement of our method over baselines on two datasets using standard metrics (and see Tables 5 and 6 in the rebuttal pdf for additional metrics). We also provided a human user study showing our method outperforms the SOTA (line 343). Additionally, our demo video (Supp.) contains examples comparing audio generated by our model and the current SOTA (AViTAR [3]) for reviewers to better understand the perceptual improvement. We believe that this mix of both hard metrics on the one hand and perceptual studies and qualitative examples on the other hand facilitates understanding the relative improvement. If there is a more specific question from the reviewer, we’d be happy to address it. 4) **While the authors are commended for including a human perception study, the design of the study may be somewhat flawed.** A: The setup details start at line 343. We believe this is a solid perception study following good practices. However, if we have missed anything, could the reviewer please indicate exactly what might be somewhat flawed? **Questions:** 1) **Is it fair to call the proposed approach “self-supervised”? Perhaps terms such as “semi-supervised” or “weakly-supervised” are more appropriate?** A: We appreciate this suggestion and we agree there’s room for different interpretations of the terms. To explain our thinking: While the RT60 estimator and off-the-shelf dereverberator require supervision for their training, they are treated as frozen modules in our method. As such, our model can be extended to a new dataset with no additional supervision necessary, as we demonstrated with the application to AVSpeech-Rooms. 
Terms such as “semi-supervised” and “weakly-supervised” imply that the amount of paired data required scales with the amount of full training data, whereas our method requires only the finite amount of training data on which the RT60 estimator and dereverberator modules have been trained. However, we appreciate the reviewer’s point, and “unpaired” may be a more appropriate term than “self-supervised” given this ambiguity. 2) **Q: What is the motivation for using the MetricGAN approach to construct a differentiable proxy of the acoustic residue metric? Some discussion would be beneficial.** A: Thanks for the insightful question. While the Acoustic Residue metric is approximately differentiable, our metric optimizes for both Acoustic Residue and SRMR (for both reverberation and speech quality), and the implementation of SRMR we used is not differentiable, motivating the use of the MetricGAN approach. This balance is important, as we found that incorporating SRMR into the metric improves performance beyond a pure Acoustic Residue metric (see ablations table in Supp.), and provides stability during reverberator fine-tuning (lines 232-237). Our approach also allows for extensibility to other non-differentiable speech quality scores such as PESQ. 3) **Q: Line 30: A reference has not been rendered properly “?”** A: Thank you. **Limitations:** 1) **It could be beneficial to include not only the mean RT60 error (RTE (s)) as reported in Table 1, but also the bias and correlation coefficient.** A: Thank you for the suggestion. In short, we’ve added the requested error metrics and they reinforce our original claims. See Tables 4, 5 and 6 in the rebuttal pdf. 2) **One limitation of the use of the RT60 estimation in the proposed acoustic residue metric is that this measurement appears to consider only the broadband RT60. Future work could consider a band-wise RT60 estimation.** A: We thank the reviewer for their insight. 
While our results already show consistent gains using the broadband RT60 in the residue metric, we agree it will be interesting future work to explore a band-wise variation. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification and additional details provided in the rebuttal document. These results reinforce the claims made in the paper regarding the superiority of the proposed approach. As previously argued, the term "self-supervised" is probably not the most accurate due to the required pre-training of the supporting models, which use supervised data. However, this point should not limit the acceptance of this work. I will retain my original score and advocate for acceptance.
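The MetricGAN rationale discussed in this thread (learn a differentiable surrogate for a metric one can evaluate but not backpropagate through, such as SRMR) can be illustrated with a toy example. The scalar feature, the stand-in "black-box" metric, and the linear regressor standing in for the discriminator are all illustrative assumptions, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_metric(x):
    # Stand-in for a non-differentiable score (e.g. SRMR): a black
    # box we can evaluate on audio but cannot backprop through.
    return 2.0 * np.mean(np.abs(x)) + 0.5

# Collect (feature, score) pairs from random "audio" snippets.
feats, scores = [], []
for _ in range(200):
    x = rng.normal(scale=rng.uniform(0.1, 2.0), size=64)
    feats.append(np.mean(np.abs(x)))
    scores.append(true_metric(x))

# "Discriminator": regress the black-box metric from the feature,
# yielding a differentiable surrogate of the score.
A = np.stack([feats, np.ones(len(feats))], axis=1)
(w, b), *_ = np.linalg.lstsq(A, np.asarray(scores), rcond=None)

def surrogate(x):
    # A generator could now follow gradients of this proxy instead
    # of the non-differentiable metric itself.
    return w * np.mean(np.abs(x)) + b
```

In the actual MetricGAN setting the regressor is a neural network retrained alongside the generator, but the principle is the same: the discriminator's regression of the true metric value supplies the gradient signal.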
Rebuttal 1: Rebuttal: Thanks to all reviewers for their time and valuable feedback. Three reviewers recommend accepting. The other two suggest additional error metrics of interest and ideas for how we plot the results (Reviewer AZuk) and easy-to-address clarifications and items addressed already in the text (Reviewer 2YZ1). We address all items below and show the requested metrics in the rebuttal pdf. Pdf: /pdf/239553961d00be2a997c9fdafae0933c114c3a58.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work aims to perform acoustic matching with unpaired training data, i.e. observing the audio only in the target environment without samples of the same audio in the source environment. The basic process involves the common pipeline of a dereverberator and a reverberator. But one key point in this work is to introduce an acoustic residue metric to measure the residual information in a waveform to ensure the quality of dereverberated audio. Finally, experiments on SoundSpaces-Speech and AVSpeech show its effectiveness. Strengths: The presentation is very clear. The motivation to tackle the task is also intuitive and meaningful. The method is reasonable. I like the smart and simple manner proposed in this paper to tackle the key limitation in visual acoustic matching. Weaknesses: (1) Is it limited to evaluate with only two metrics, STFT and RTE? (2) Although the number of methods using unpaired data may be few, the number of methods using paired data should be large. There could be some comparison with them to show the performance level of the proposed method. For example, comparing on other paired datasets with existing methods which use paired data could show the difference between methods that use and do not use paired data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) The STFT error changes noticeably in AVSpeech-Rooms compared with other settings, i.e. about 6~7 vs 1~2; what’s the possible reason? It would be nice to give a clarification. (2) Two questions about the evaluation metrics and the comparison are as listed in the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for the valuable feedback and positive remarks.** 1) **Is it limited to evaluate with only two metrics, STFT and RTE?** A: We focus on RT60 and spectrogram loss metrics as these capture well the reverberant acoustic properties of audio, and are used in prior works for acoustic matching [3, 23, 35]. Furthermore, we also provide quantitative results of a human user study for evaluation of perceptual quality in the paragraph starting at line 343. Based on the reviewers’ helpful suggestions of additional metrics, we now also provide log magnitude STFT error, RT60 bias/correlation coefficient, and relative RT60 error (Tables 5 and 6 in the rebuttal pdf). We also evaluate the de-biaser and dereverberator using a speech quality metric (SRMR) (Table 4 in the rebuttal pdf). These new metrics continue to support our claims, especially on real-world data. 2) **Although the number of methods using unpaired data may be few, the number of methods using paired data should be large. There could be some comparison with them to show the performance level of the proposed method. For example, comparing on other paired datasets with existing methods which use paired data could show the difference between methods that use and do not use paired data.** A: The task of Visual Acoustic Matching (VAM), first introduced in [3], is relatively new, and as such there are very few existing methods, whether using paired or unpaired data. To the best of our knowledge, we have compared our method against all available relevant works: the original and only Visual Acoustic Matching paper [3], as well as ViGAS [5], a model designed for the related task of novel-view acoustic synthesis that we adapt for VAM (paragraph at line 294). We also evaluate audio-only variants of these models for comparison against our audio-only model. In addition to these, we evaluate a naive approach in which the dereverberated input is copied to the output. 
Table 1 displays these baselines in full. If the reviewer has a specific baseline in mind that we have not covered, we would welcome the suggestion and are happy to add it. 3) **The error of STFT changes obviously in AVSpeech-Rooms compared with other settings, i.e. about 6~7 vs 1~2, what’s the possible reason? It would be nice to give the clarification.** A: AVSpeech-Rooms contains a variety of non-speech sounds (e.g. white noise from air conditioning, clicking/tapping noises from object interactions, and even background music) which make proper reverberation even more challenging. These signals may be perceptually weak, but they will show up on a spectrogram and contribute to the larger STFT error which we observe on AVSpeech-Rooms for all methods (Table 1). In contrast, data in SoundSpaces-Speech (derived from a state-of-the-art acoustics simulator) convolves anechoic audio with a Room Impulse Response based on the room geometry (lines 245-252) to produce audio that contains no artifacts or background noises to contribute to spectrogram errors. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanation provided by the authors. I suggest the authors include the comparative results under the other metrics mentioned in Q1 in the appendix, to further enhance the persuasiveness of the results. I don't have any other questions, and I still think this is a good piece of work, so I will maintain my score.
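The RT60 error statistics added in the rebuttal (mean error, bias, correlation coefficient) are standard to compute from paired predictions and ground truth; a minimal illustrative sketch:

```python
import numpy as np

def rt60_stats(pred, gt):
    """Summary statistics for predicted vs. ground-truth RT60 values."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    err = pred - gt
    return {
        "rte": float(np.mean(np.abs(err))),          # mean absolute RT60 error (s)
        "bias": float(np.mean(err)),                 # systematic over/under-estimation
        "corr": float(np.corrcoef(pred, gt)[0, 1]),  # Pearson correlation coefficient
    }
```

Reporting bias and correlation alongside the mean error, as the reviewer suggested, distinguishes a systematic offset from random scatter in the predictions.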
Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection
Accept (poster)
Summary: This work builds a unified framework for multi-class anomaly detection (AD) by using normal images only. It identifies the identical shortcut issue of reconstruction-based AD methods and tries to alleviate it by augmenting the memory with hierarchical discrete iconic prototypes. A switching mechanism is employed to deal with the multi-class scenario. Furthermore, the hierarchical prototype-oriented optimal transport module is used to calibrate the anomaly scores. Experiments on MVTec-AD and CIFAR-10 demonstrate the superiority of the method. Strengths: 1. The motivation of using discrete iconic prototypes makes sense to alleviate the “shortcut” of reconstruction on abnormal images. 2. It achieves state-of-the-art performance on the two popular benchmarks. 3. The paper is easy-to-follow and well-written with clear motivation for each module. Weaknesses: 1. Missing necessary comparison on additional datasets, e.g., the VisA dataset [1], a multi-class industrial anomaly detection dataset. 2. As a core contribution, the details of vector quantization (VQ) (e.g., the quantization method) are not thoroughly presented and some related questions are not discussed, e.g., how do different quantization precisions/methods affect the performance? [1] Zou, Yang, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. "Spot-the-difference self-supervised pre-training for anomaly detection and segmentation." In European Conference on Computer Vision, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 1, why are the results under the one-for-all setting better than those under the one-for-one setting? For example, the results on Capsule are far lower under the one-for-one setting. Though the authors claim that the increased data diversity is beneficial, it requires more experiments to support the argument, e.g., try taking images from different N (N >= 2) classes for training. 2. The ablation study is incomplete. 
Missing the results when only Hierarchical VQ is enabled. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed in the Discussion section and potential negative societal impacts are not mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your effort in reviewing! A1: Following the reviewer's suggestion, we have implemented experiments on the VisA dataset. The performance is shown in Table 4 of the uploaded PDF. As we can see, our model surpasses the previous one-for-all SOTA method, UniAD, by 1.3\% and 0.1\% for anomaly detection and localization, respectively. Moreover, our model surpasses the previous one-for-one SOTA method, DRAEM, by 12.7\% and 11.7\% for anomaly detection and localization. This demonstrates the effectiveness and robustness of our model on the VisA dataset, which is more challenging than MVTec-AD. We will add the results on the VisA dataset to the revised manuscript. A2: The hierarchical VQ-based layers layer-wisely quantize the visual tokens $h^l$ to the prototypes $e_k^l$ in the learnable codebooks $E^l \in \mathbb{R}^{K \times C}$. For the output $h^L$ of the final encoder layer, we replace the visual tokens $h^L$ with their most similar prototypes $e_i^L$ in the codebook $E^L$ as: $\theta = \texttt{Quantize}(h^L) = e_i^L, \quad i = \arg\min_{j} \| h^L - e^L_j\|_2^2,$ where $\theta$ represents the global quantized vector. Each visual token is quantized based on its distance to the prototype vectors in the codebook $E^L$, such that it is replaced by the nearest prototype vector in the codebook and transmitted to the decoder. Moreover, we find that merging fine-grained concrete information with abstraction-level semantics is critical for robust anomaly detection. Hence, we fuse the multi-level visual tokens with the global quantized vector $\theta$ to learn hierarchical prototypes, maximizing the preserved nominal information, as stated in Equation 1 of the main paper. Intuitively, we hierarchically replace the visual tokens $h^l$ with their most similar prototypes in the codebook $E^l$ as the quantized vector $z^l$.
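The nearest-prototype lookup described above can be sketched in a few lines of NumPy (a generic VQ-VAE-style quantization step for illustration only, not the authors' implementation; token and codebook shapes are assumed):

```python
import numpy as np

def quantize(h, codebook):
    """Replace each visual token with its nearest prototype.

    h:        (T, C) visual tokens
    codebook: (K, C) learnable prototypes E
    returns:  quantized tokens (T, C) and prototype indices (T,)
    """
    # Squared L2 distance between every token and every prototype.
    d = ((h[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    idx = d.argmin(axis=1)          # i = argmin_j ||h - e_j||^2
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
tokens = np.array([[0.9, 1.1], [0.1, -0.2]])
z, idx = quantize(tokens, codebook)
# each token is snapped to its closest prototype: indices [1, 0]
```

The quantized tokens `z` (rather than the continuous tokens) are what gets passed on to the decoder.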
Note that there is no real gradient defined for quantization; however, we approximate the gradient similarly to the straight-through estimator and simply copy the gradients from the decoder input $z^l$ to the embedding before quantization. One could also use the subgradient through the quantization operation, but this simple estimator worked well for the initial experiments in this paper. We promise to add detailed and clear descriptions of vector quantization in the revised paper. \textbf{Effect of VQ:} As shown in Table 4, the vanilla Transformer obtains 70.5 and 81.4 on AD and AL, which increase to 96.4 and 96.8 after adding the VQ layers. The performance of the model increases by nearly 26\%, which demonstrates that VQ plays the key role in anomaly detection. Our VQ module acts as an information bottleneck through which only the normal information is allowed to pass, leading to larger feature migration and information loss for anomalies. This discrepancy of information loss between normal and abnormal inputs serves as a key factor in effective anomaly detection. A1-to-Q1: We think there are two main reasons for the better performance under the one-for-all setting: 1) The amount of training data is larger and the training data diversity is increased. Thus, the model representation ability can be improved. Furthermore, diverse training data forces the model to learn robust and discriminative representations that separate the categories, leading to tighter boundaries modeled by the prototypes of each category. We have added an experiment with a smaller $N$ in Table 5 of the uploaded file. Specifically, we randomly choose 5 categories from all the 15 categories of MVTec-AD three times without overlap, namely $N=5$ in this case. We find that the mean AUCs of detection and localization are 97.6\% and 97.3\%, respectively. Compared with our model with $N=15$, the detection performance drops by 0.4\%, which verifies the benefit of training data diversity to some extent.
2) The switching mechanism under the one-for-all setting will classify each input image into a single category and choose the corresponding codebook and expert for reconstruction. For normal images, the input is highly likely to be classified into the correct category, thus switching to the proper reconstruction expert and codebook. For abnormal images, there remains large uncertainty about which reconstruction expert and codebook will be selected, because the anomalies are unseen during training. Thus, the reconstruction uncertainty of the abnormal image is increased. Note that the difference between normal and abnormal is the key factor deciding the anomaly detection performance. Thus, this uncertainty for anomalies improves the performance under the one-for-all setting, which also proves that the switching mechanism of our proposed method is more suitable for multi-class anomaly detection. 3) The data distribution of each category is different, imposing different demands on model representation ability. It is worth noting that our network architecture is simply implemented without many tricks, such as hyperparameter tuning for each category (e.g., prototype numbers, latent dimension) or adaptive architectures for individual categories. After tuning the parameters on the Capsule category, we improve the performance from 88.3 to 94.7 on this category. However, the original intention of the one-for-all case is to save computational resources and achieve efficient modeling for all the categories simultaneously. On the whole, our model can efficiently achieve decent performance under both the one-for-all and one-for-one settings. A2-to-Q2: We have added extra ablation studies in Fig. 5 of the uploaded file. As we can see, employing only the Hierarchical VQ achieves 97.1\% and 96.9\% in detection and localization, gaining 26.6\% and 15.5\% over the vanilla Transformer.
The results verify that the main performance gain comes from our proposed hierarchical VQ-based framework. Potential negative societal impacts: Anomaly detection for video surveillance or social multimedia may raise privacy concerns. --- Rebuttal Comment 1.1: Title: Keep my rating Comment: I appreciate that the authors provide additional experiments (main results on the VisA dataset and the ablation study) and detailed descriptions of the vector quantization. After carefully investigating the other reviewers' comments and the rebuttal, the authors address most of my concerns; however, I would like to keep my original rating due to the limited technical novelty of quantization as a memory mechanism, which has been widely explored in the literature. --- Reply to Comment 1.1.1: Comment: Thank you a lot for your effort in reviewing this submission! Please allow us to explain the difference between our model and the previous memory-based works. **The 'shortcut' problem-oriented motivation:** We aim to model a discrete space to intrinsically prevent anomaly information from leaking into the reconstruction. In contrast, the existing memory-based methods recombine and aggregate the discrete memory items, falling into an unknown continuous latent space which might be distorted. Instead, we force the anomaly features to be replaced by a single discrete prototype. In addition, we would like to highlight that simplicity is the ultimate form of sophistication. Therefore, rather than saying the novelty resides in the quantization technique, what we want to express is that VQ is a proper pathway to optimize prototypes for tightening the information bottleneck, which is an effective way to achieve our purpose rather than the novelty itself.
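As a toy illustration of this information-bottleneck argument (entirely hypothetical numbers, not from the paper): a feature close to a normal prototype migrates little when quantized, while an unseen abnormal feature migrates far, yielding a large reconstruction residue.

```python
import numpy as np

# Prototypes learned from normal data only (toy codebook).
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])

def score(feature):
    """Anomaly score = distance the feature migrates when snapped
    to its nearest normal prototype (i.e., the information loss)."""
    d = ((feature - prototypes) ** 2).sum(-1)
    return d.min()

normal = np.array([0.95, 1.05])   # close to a normal prototype
anomaly = np.array([3.0, -2.0])   # unlike anything seen in training
# The anomaly migrates far further than the normal feature,
# so its score is much higher.
```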
**Hierarchical design is necessary and crucial:** Rather than directly employing the original vector quantization mechanism, we elaborately investigate a cascaded VQ transformer to overcome the **"prototype collapse"** problem: at some point during training, part of the latent codes in the codebook may no longer work and the modeling capacity is limited by the discrete representations, resulting in collapsed reconstruction. **The hierarchical design is not easy**, as it is sophisticated to balance the prototypes of different levels to maximize the nominal information available. As shown in Table 5, another hierarchical structure results in a large performance drop. Furthermore, the hierarchical design fits the hierarchical nature of vision and matches the calibration of the anomaly score with hierarchical prototype-oriented optimal transport, which could also reduce the decoding search time and retain high inference speeds. Even though HVQ-Trans is based on the vector quantization mechanism, it achieves significantly better anomaly detection performance than the vanilla VQ model and the previous memory-based algorithms. **VQ-based Transformer:** This paper proposes an original way to leverage the iconic prototypes in a Transformer, which properly fuses the hierarchical nominal information and tightens the information bottleneck. We believe there is a subtle tradeoff in the information bottleneck: allowing more information to pass will cause anomaly leakage, while allowing less will lead to poor reconstruction. Our HVQ-Trans can handle this implicit issue well. **Switching prototypes rightly fit the one-for-all setting:** Our model targets the challenging 'one-for-all' case, which suffers from the 'identical shortcut' issue more severely, as the model generalizability increases due to the complex data distribution of multiple classes.
To tightly fit the unified case, we propose the switching prototypes to set the corresponding codebook for different data distributions, thus alleviating the 'identical shortcut' issue. Please also consider that the paper provides novel techniques, such as the switching experts and the prototype-oriented learning and scoring, to fit the challenging and practical one-for-all setting. We want to express that the quantization technique is **not simply employed in this paper as it has been explored in the literature.** In the context of visual anomaly detection, we firmly believe that our method provides a new way to model a discrete space without confusion, which intrinsically prevents anomaly information from leaking into the reconstruction and constitutes a meaningful contribution. As the deadline for discussions between reviewers and authors is approaching, we sincerely invite you to take a moment from your busy schedule to read our reply. Thanks again for your valuable time! We are genuinely grateful for your response and help. Best regards, Anonymous author
Summary: This paper proposes a variational autoencoding framework for unsupervised anomaly detection. This work addresses the identical shortcut issue by preserving the typical normal patterns as discrete iconic prototypes and also overcomes the prototype collapse problem. Strengths: It designs a network from the perspective of solving the identical shortcut issue. The method achieves good experimental results on classical datasets. Weaknesses: 1. The descriptions of the figures need to be clearer. For example, in Fig.1, it would be helpful to label each image and coordinate system to indicate what they represent. Additionally, labeling the subfigures with (a) and (b) would make the explanation clearer, such as in Fig.5. 2. The performance on the chosen datasets is close to saturation; in some categories the results even reach 100, which is unreasonable. In this field, more challenging datasets should be tested to further push the boundaries of research. 3. The performance of the proposed method under the one-for-one setting still has a gap compared to the SOTA method. 4. The innovation is limited, as most of the ideas have already been proposed. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Why does the proposed method have lower performance under the one-for-one setting than under the one-for-all setting, which differs from most other methods? What is the reason for the significant performance differences of the proposed method across categories, such as the poor performance on “Toothbrush”, and is this explainable? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: The authors have clarified the limitation of this method that the category labels are assumed to be available during the training stage, and identify addressing this issue as future work by incorporating the model with clustering methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: A1: Thanks for your suggestions. We have revised the descriptions and figures to be clearer. Due to the page limitation, we show the revised Fig. 1 and Fig. 5 in the uploaded PDF. We promise to carefully polish all the descriptions and figures in our revised version. A2: Thanks for your suggestion. 1) As the reviewer mentioned, the performance on the widely-used industrial anomaly detection dataset MVTec-AD tends towards saturation. This might be because the images in the dataset have simple backgrounds and high resolution, so the anomalies are relatively easy to detect. In contrast, our model targets the challenging 'one-for-all' case, which suffers from the 'identical shortcut' issue more severely, as the model generalizability increases due to the complex data distribution of multiple classes. The performance under the 'one-for-all' setting on MVTec-AD still needs to be improved. 2) It is worth noting that we also demonstrate experiments on a challenging dataset, i.e., CIFAR-10, which contains complex backgrounds and various anomalies. Our model achieves 86.1, while the comparison models result in \{55.9, 55.8, 78.9, 72.4, 72.1, 82.1\} for anomaly detection. 3) Following the reviewer's suggestion, we have further implemented experiments on another industrial anomaly detection dataset, the VisA dataset. The performance is shown in Table 4 of the uploaded PDF. As we can see, our model surpasses the previous SOTA method, UniAD, by 1.3\% for anomaly detection on the new dataset. We will add the results on the VisA dataset to the revised manuscript. A3: We hope to address this concern from the following aspects: 1. When one-for-all models handle the one-for-one task, our model surpasses UniAD (the SOTA method for the unified case) from 96.6 to 96.9 for anomaly detection, and improves the anomaly localization performance from 96.6 to 97.1.
Thus, under this fair comparison setting, our model generalizes better to the one-for-one setting. 2. Without exhaustive individual parameter tuning for each category, we set the same hyperparameters for every category, which might lead to a gap. For example, the categories of textures or simple objects, such as bottle and grid, possess limited iconic normal patterns. The corresponding number of prototypes might be smaller than those of complex objects. Rather than specific and exhaustive model tuning for each category, our network architecture is simply implemented under the one-for-one setting. Accordingly, when we particularly tune the hyperparameters on the 'Screw' category in the MVTec-AD dataset, we achieve 94.5\% performance, surpassing all the comparison methods. Besides, although DRAEM (the SOTA one-for-one method) achieves good performance under the separate case, it drops severely (9.9\% and 10.1\%) when changing to the unified case. Thus, on the whole, our model achieves decent performance in both settings. 3. Last but not least, we want to reiterate that our purpose is to alleviate the pivotal 'identical shortcut' issue in anomaly detection under the challenging one-for-all setting. When changing from the one-for-one setting to the one-for-all setting, most of the existing alternatives fail to handle the challenging cases. Apart from UniAD getting a slight drop (0.1\%) for detection, the other methods drop severely (12.4\% on average) when changing to the one-for-all setting. However, our model gets a 1.1\% performance gain for anomaly detection. This indicates that the frameworks of UniAD and our model can handle the challenging setting. Thus, instead of saying there is a gap compared to the SOTA method on the separate case, what we want to express is that there is no performance drop (even a slight improvement) from the separate case to the unified case.
A4: We think there are two main reasons for the better performance under the one-for-all setting: 1) The amount of training data is larger and the training data diversity is increased. Thus, the model representation ability can be improved. Furthermore, diverse training data forces the model to learn robust and discriminative representations that separate the categories, leading to tighter boundaries modeled by the prototypes of each category. 2) The switching mechanism under the one-for-all setting will classify each input image into a single category and choose the corresponding codebook and expert for reconstruction. For normal images, the input is highly likely to be classified into the correct category, thus switching to the proper reconstruction expert and codebook. For abnormal images, there remains large uncertainty about which reconstruction expert and codebook will be selected, because the anomalies are unseen during training. Thus, the reconstruction uncertainty of the abnormal image is increased. Note that the difference between normal and abnormal is the key factor deciding the anomaly detection performance. Thus, this uncertainty for anomalies improves the performance under the one-for-all setting, which also proves that the switching mechanism of our proposed method is more suitable for multi-class anomaly detection. The reason for the significant performance differences: The performance differences across categories are due to: i) the data distribution of each category is different, imposing different demands on model representation ability; ii) our network architecture is simply implemented without many tricks, such as hyperparameter tuning for each category (e.g., prototype numbers, latent dimension) or adaptive architectures for individual categories. On the whole, it achieves decent performance under both the one-for-all and one-for-one settings.
However, this observation reflects a common phenomenon for all one-for-all methods; please refer to UniAD. Even for one-for-one methods, such as the well-known CutPaste, the performance on cable is only 81.2 due to the complex and noisy data distribution. --- Rebuttal Comment 1.1: Comment: Dear Reviewer ksQL, We deeply appreciate your thoughtful review and your time. Following your constructive suggestions, we have discussed the performance saturation, explained the performance gaps under the one-for-one setting with extra experimental results, and revised the descriptions and figures. We have tried our best to address the mentioned concerns/problems in the rebuttal. We would like to know if you have anything unclear or so. Please feel free to let us know. We are delighted to clarify them. If our response has addressed your concerns, would you mind considering re-evaluating our work based on the updated information? Best regards, Authors --- Rebuttal Comment 1.2: Title: I keep my initial decision! Comment: After carefully reading the rebuttal, I think the problems of this paper, namely technical flaws, weak evaluation, inadequate reproducibility, and incompletely addressed ethical considerations, are still unsolved. --- Rebuttal Comment 1.3: Title: The decision is kept consistent. Comment: The method is a stack of previous techniques, and the experimental results and analysis are not attractive. --- Reply to Comment 1.3.1: Comment: Thanks for your reply. We sincerely hope that Reviewer ksQL could **concretely point out** the technical flaws, evaluation weaknesses, inadequate reproducibility, as well as the ethical considerations. First, it is really difficult for us to figure out what the **ethical problem** is in anomaly detection, a widely-studied field. Furthermore, we have enough confidence to **firmly uphold our reproducibility**, as our code has been attached in the supplementary files and will be released to the public.
In addition, we have conducted extensive experiments on three public datasets, i.e., MVTec-AD, CIFAR-10 and VisA, compared against ten methods published in the recent two years (after 2021). The competitive experimental results and intuitive visualizations verify our technical soundness. As for the concern of stacking previous techniques, we hope to resolve it from the following aspects: 1) Our model is problem-oriented and elaborately designed to alleviate the pivotal 'identical shortcut' issue in anomaly detection. In contrast to most previous methods that model a continuous latent space and suffer from over-generalization to anomalies, we aim to model a discrete space to intrinsically prevent anomaly information from leaking into the reconstruction. However, the existing memory-based methods recombine and aggregate the discrete memory items, falling into an unknown continuous latent space which might be distorted. In contrast, we force the anomaly features to be replaced by a single discrete prototype. VQ is a proper pathway to optimize prototypes for tightening the information bottleneck. Rather than directly employing the original vector quantization mechanism, we elaborately investigate a cascaded VQ transformer to overcome the "prototype collapse" problem, which could also reduce the decoding search time and retain high inference speeds. This design fits the hierarchical nature of vision and matches the calibration of the anomaly score with hierarchical prototype-oriented optimal transport. As far as we know, this is the first attempt to impose strict restrictions on a discrete latent space, and it is the first time the validity of VQ for anomaly detection has been verified. 2) The optimal transport learning is not only developed to facilitate prototype learning as in previous works, but also dexterously measures the feature-level anomaly score to robustly and accurately identify anomalies.
3) Our model targets the challenging 'one-for-all' case, which suffers from the 'identical shortcut' issue more severely, as the model generalizability increases due to the complex data distribution of multiple classes. To tightly fit the unified case, we propose the switching mechanism to choose the corresponding codebook and expert for different data distributions, thus alleviating the 'identical shortcut' issue. In the context of visual anomaly detection, we believe that our method provides a new way to model a discrete space to intrinsically prevent anomaly information from leaking into the reconstruction, which constitutes a meaningful contribution.
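The switching mechanism described in this reply can be sketched as follows (a hypothetical minimal version: a classifier routes the input to a per-category codebook, which then quantizes it; the category names, the distance-based stand-in classifier, and all variable names are assumptions for illustration, not the paper's code):

```python
import numpy as np

# One codebook of normal prototypes per category (toy example).
codebooks = {
    "bottle": np.array([[0.0, 0.0], [1.0, 0.0]]),
    "capsule": np.array([[0.0, 1.0], [1.0, 1.0]]),
}

def classify(feature):
    """Stand-in for the multi-category classifier: pick the category
    whose codebook contains the overall nearest prototype."""
    return min(codebooks,
               key=lambda c: ((feature - codebooks[c]) ** 2).sum(-1).min())

def switch_and_quantize(feature):
    # 1) classify the input, 2) switch to that category's codebook,
    # 3) quantize with the selected codebook.
    cat = classify(feature)
    cb = codebooks[cat]
    idx = ((feature - cb) ** 2).sum(-1).argmin()
    return cat, cb[idx]

cat, z = switch_and_quantize(np.array([0.9, 0.1]))
# a feature near (1, 0) is routed to the "bottle" codebook
```

For an anomalous input that resembles no category, which codebook gets selected is essentially arbitrary, which is the source of the extra reconstruction uncertainty the rebuttal describes.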
Summary: This paper proposes a feature-reconstruction-based framework for multi-class anomaly detection, called hierarchical vector quantized Transformer (HVQ-Trans). To address the "identical shortcut" problem occurring in the reconstruction-based framework, the proposed method replaces the original encoding features with the nearest iconic prototypes learned from normal training data, which are then decoded with a VQ-based transformer decoder to reconstruct the anomaly regions into normal regions. Besides, a hierarchical prototype-oriented learning and anomaly scoring method is developed to guide prototype learning and accurately identify anomalies. Strengths: 1. The motivation is clear and reasonable. 2. The writing quality and paper structure are good. 3. The idea of using vector quantization for feature reconstruction in anomaly detection is interesting. The proposed method is technically sound. 4. The proposed method obtains a decent performance improvement over the current methods. Weaknesses: 1. Using prototypes may lose high-frequency information, leading to imprecise feature reconstruction. 2. Is the optimal transport necessary? How about simply using similarity scores as the weights? 3. The proposed HVQ-Trans has a large number of model parameters. Please compare the proposed method with UniAD in terms of inference speed and model parameters. 4. Please add more visualization comparisons with related methods, such as UniAD. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see the weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see the weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your positive comments on this submission! We have tried our best to address the mentioned concerns in the rebuttal. Feel free to let us know if there is anything unclear or so. We are happy to clarify them. A1: Yes. We want to emphasize that the different degrees of imprecise reconstruction between normal and anomaly are the key factor in anomaly detection. In other words, we pursue the difference in reconstruction ability between normal and anomaly, rather than precise reconstruction itself. Specifically, during training, the typical normal patterns are recorded in the discrete variables, i.e., iconic prototypes. When encountering anomalies during the testing stage, the abnormal patterns will also be quantized to the normal prototypes, leading to larger feature migration and information loss, highlighted by a higher reconstruction error. It is worth noting that while the information loss triggered by VQ also exists for normal images, it is significantly more pronounced for anomalous images. Thus, this discrepancy in information loss serves as a key factor in effective anomaly detection. By investigating this difference, we can enhance the accuracy of our model in distinguishing abnormal regions. A2: We employ optimal transport (OT) to better explore the relationships between the learnable prototypes and the visual features of input images. Specifically, traditionally used similarity scores such as Euclidean or cosine only capture the point-wise relationship between two sets and impose an equally-important prior on every such relationship. In practice, however, such a non-informative prior is usually sub-optimal and leads to poor performance. In our model, the optimal transport learning is not only developed to facilitate prototype learning with regularization at the feature distribution level, but also dexterously measures the feature-level anomaly score to robustly and accurately identify anomalies.
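To illustrate how an OT-based score differs from a point-wise similarity, here is a minimal entropic-regularized (Sinkhorn) transport sketch between a set of features and a set of prototypes. This is a generic Sinkhorn iteration under assumed uniform marginals, not the authors' POT module:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropic OT between uniform marginals given a cost matrix.

    cost: (n, m) pairwise cost between features and prototypes.
    Returns a transport plan (n, m) whose rows and columns sum to
    the uniform marginals, unlike an independent nearest match.
    """
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

features = np.array([[0.0, 0.0], [1.0, 1.0]])
prototypes = np.array([[0.1, 0.0], [0.9, 1.0]])
cost = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(cost)
# the total transport cost (plan * cost).sum() can then serve as a
# distribution-level score rather than a point-wise one
```

Because the plan must respect both marginals, it couples the whole feature set to the whole prototype set, which is the distribution-level regularization the rebuttal refers to.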
Moreover, as reported in Table 4 of the main paper, the POT module leads to 0.4\% and 0.1\% performance gains on anomaly detection and localization, respectively, on the MVTec-AD dataset. Furthermore, as shown in Table 1 in the appendix, the POT module results in a 2.6\% improvement on the CIFAR-10 dataset. This demonstrates that the POT module is effective at detecting anomalies due to its well-established measurement alignment, especially showing stability for the complex scenarios of CIFAR-10. A3: The comparison results are listed in Table 2 of the uploaded PDF. With the image size fixed at $224 \times 224$, we compare our model with all competitors regarding the inference FLOPs and learnable parameters. We can tell that the advantage of our approach does not come from a larger model capacity. This table will be included in the camera-ready version. A4: The visualization comparisons with UniAD are shown in the uploaded PDF. More visualization comparisons will be added to the appendix of our revised paper.
Summary: This paper introduces a novel approach to multi-class anomaly detection (AD) by integrating hierarchical embedding vector quantization. To tackle the problem of identical shortcuts in the reconstruction-based AD paradigm, the authors suggest enlarging the reconstruction residue of abnormalities by introducing discrete prototypes into the model. Additionally, the model incorporates a switching mechanism to further improve the feature reconstruction process. The proposed model is evaluated on the MVTec and CIFAR-10 datasets and compared against prior arts. Strengths: 1. The paper is well-written and easy to follow. 2. The hierarchical vector quantized transformer is well presented. The appendix about loss back-propagation explains how parameters are updated despite the quantization operation. 3. Extensive experiments are conducted. Ablation studies on model components suggest the effectiveness of the proposed method. Weaknesses: 1. The vector quantization mechanism described in this paper appears to be a specific instance of the memory mechanism, where a continuous-valued feature is substituted with a numerical prototype. If this is indeed the case, such a memory mechanism has been widely employed in previous studies on AD. 2. What does 'C' represent in Figure 2 and Figure 3? Is it indicating feature concatenation? The paper briefly mentions feature aggregation but does not provide a specific definition or explanation of the feature aggregation operation. 3. Within the switching mechanism, a multi-category classifier, N codebooks, and reconstruction experts are necessary. In this scenario, can we consider the model as a combination of a single encoder and N decoders that are specific to each class? 4. The experimental evaluation compares the proposed method with US, PSVDD, PaDiM, CutPaste, MKD, and DRAEM. However, there exist several recently proposed AD models, such as PatchCore [1], RD4AD [2], CS-Flow [3], UTRAD [4], which outperform the aforementioned methods.
It is highly recommended to include these methods in the experimental analysis as well. [1] Karsten Roth et al. “Towards total recall in industrial anomaly detection”. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022, pp. 14318–14328. [2] Hanqiu Deng and Xingyu Li. “Anomaly Detection via Reverse Distillation from One-Class Embedding”. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022, pp. 9737–9746. [3] Marco Rudolph et al. “Fully convolutional cross-scale-flows for image-based defect detection”. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022, pp. 1088–1097. [4] Liyang Chen et al. “UTRAD: Anomaly detection and localization with U-Transformer”. Neural Networks 147 (2022), pp. 53–62. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness section. Thanks. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No discussion on limitations and potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your positive comments! We have tried our best to address the mentioned concerns/problems. Feel free to let us know if there is anything unclear or so. We are happy to clarify them. A1: To some extent, our vector-quantized prototype can be regarded as a special kind of memory item. As the reviewer mentioned, a branch of approaches~\cite{roth2022towards,gong2019memorizing,xiang2021painting} has investigated memory-augmented networks recently, which augment the deep autoencoder with a memory module to record the normal information of the training data. These methods aim for a low reconstruction error on normal samples and a high reconstruction error when the input is not similar to normal data, that is, an anomaly. The relevant memory items are retrieved, and a weighted average of the related memory content is aggregated into the decoder for reconstruction. Thus, we claim four differences between our proposed method and the previous methods: 1) In previous works, the discrete memory items are recombined and weighted-averaged, falling into an unknown continuous latent space that might be distorted. Intuitively, some anomalous regions cannot be reconstructed by the discrete latent memory but could be decoded from the unknown latent space. In contrast, we set a strict information bottleneck to force an abnormal data point to flip to a normal data point, constraining the leakage of abnormal information. 2) The prototypes are individually learned for each category and adaptively chosen by a switching mechanism. This mechanism tightly fits the unified case, which suffers from the 'identical shortcut' issue more severely, as the model generalizability increases due to the complex data distribution of multiple classes. We choose specific codebooks for different data distributions to alleviate the 'identical shortcut' issue. 
3) A hierarchical VQ-based approach is developed to better overcome the "codebook collapse" problem and to effectively merge fine-grained concrete information with abstract semantics, maximizing the nominal information available. 4) In addition, we introduce the POT module to guide the learning process of the prototypes with the help of OT theory, both to facilitate prototype learning and to measure the feature-level anomaly score. In contrast, the existing memory-based methods use cosine similarity or Euclidean distance to learn memory vectors. A2: Yes, 'C' represents feature concatenation. The feature aggregation includes two steps: first, the visual tokens are concatenated with the global quantized vector $\theta$; second, a layer-wise embedding function $\Upsilon^l(\cdot)$ is utilized for feature fusion. Quantization is then performed on the aggregated features. Specifically, we fuse the multi-level visual tokens $h^{l-1}$ with the global quantized vector $\theta$ to learn hierarchical prototypes, maximizing the preserved nominal information, stated as $\texttt{Quantize}(\Upsilon^l\left(\left[ h^{l-1}, \theta \right]\right)) = e_k^l$. Here, $\left[ \cdot \right]$ denotes the concatenation operation, and $\Upsilon^l(\cdot)$ refers to the embedding function. We replace the fused feature with its most similar prototype $e_k^l$ in the codebook as the corresponding quantized vector. A3: The switching mechanism does contain a multi-category classifier, N codebooks, and N reconstruction experts. However, the reconstruction experts are only the final part of our HVQ-Transformer decoder. As shown in Figure 2 in the main paper, the HVQ-Transformer has four hierarchical decoder layers. In each decoder layer, the refined queries $q^l$ from the previous layer attend to the quantized normal prototypes $z^l$ through the multi-head cross-attention layer. 
Hence, the values at abnormal regions of $q^l$ will be suppressed, and the abnormal signals can rarely be transmitted for reconstruction. The switching reconstruction experts are placed after the four decoder layers for flexible feature reconstruction. Specifically, the visual tokens from the last \textit{VQTrans-dec} layer are denoted as $d^0 \in \mathbb{R}^{N \times C}$, and the patch features are reconstructed as $\Psi_m(d^0)$, where the $m^{th}$ expert network $\Psi_m$ is selected for reconstruction. A4: We supplement the experiments with comparisons to all the mentioned methods, as shown in Table 1 of the uploaded PDF. The results show that we achieve superior performance. Limitations: The limitation and potential negative social impact are presented in the 'discussion' part of the conclusion section. --- Rebuttal Comment 1.1: Title: Thanks for the feedback from authors. Comment: The authors answered most of my questions. But I am still concerned about the novelty of the vector quantization mechanism. So I keep my original rating. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you a lot for your effort in reviewing this submission! We would like to assert the innovation of our hierarchical vector quantized (HVQ) Transformer. Rather than directly employing the original vector quantization mechanism, we elaborately investigate a cascaded VQ transformer to overcome the "prototype collapse" problem, which also reduces the decoding search time and retains high inference speed. This design fits the hierarchical nature of vision and matches the calibration of the anomaly score with hierarchical prototype-oriented optimal transport. Even though HVQ-Trans is based on the vector quantization mechanism, it achieves significantly better anomaly detection performance than the naive VQ model and previous memory-based algorithms. 
Please consider that the paper also provides novel techniques, such as the switching mechanism and the prototype-oriented learning and scoring, which deliver stable training and improved performance. In addition, we would like to highlight that simplicity is the ultimate form of sophistication. VQ is a proper pathway to optimize prototypes for tightening the information bottleneck; it is an effective way to achieve our purpose rather than the novelty itself. In the context of visual anomaly detection, we firmly believe that our method provides a new way to model the discrete latent space without mixing (recombination or aggregation), which intrinsically prevents anomaly information from leaking into the reconstruction, and this constitutes a meaningful contribution. Thanks again for your valuable time! We are genuinely grateful for your response and help. Best regards, Anonymous author
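The single-prototype replacement defended in this thread (a feature is swapped for its one nearest codebook entry rather than a weighted average of memory items) can be sketched in a few lines of NumPy. All names and shapes below are illustrative, not the authors' implementation:

```python
import numpy as np

def quantize(features, codebook):
    """Replace each feature vector by its single nearest prototype (L2 metric).

    features: (N, C) aggregated visual tokens; codebook: (K, C) prototypes e_k.
    Returns the quantized vectors and the chosen prototype indices.
    """
    # Pairwise squared distances between features and prototypes.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)            # index of the closest prototype
    return codebook[idx], idx

# Toy example: 4 tokens and a codebook of 3 prototypes in R^2.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
feats = np.array([[0.1, -0.1], [0.9, 1.2], [-1.1, 0.8], [0.2, 0.1]])
quantized, idx = quantize(feats, codebook)
```

Because the output is always an exact codebook row, an anomalous feature cannot be reproduced by blending prototypes, which is the information bottleneck argued for above.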
Rebuttal 1: Rebuttal: Dear reviewers and AC, Thanks a lot for your effort in reviewing this submission! We have tried our best to address the mentioned concerns/problems in the rebuttal. Feel free to let us know if there is anything unclear or so. We are happy to clarify them. Best, Authors Pdf: /pdf/9b5a29dffbf3ca6f03fec8686808e55f6a30c8e6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes using hierarchical vector-quantized transformer-based autoencoders for image anomaly detection. The key contribution is the use of prototypes learned using an optimal transport algorithm. Strengths: The paper addresses a critical problem in reconstruction-based anomaly detection (good reconstruction of both normal samples and anomalies) and proposes a reasonably solid solution based on discrete latent prototypes. Weaknesses: 1) The first and foremost problem with the proposed approach is the claim that it is "unsupervised". As the paper admits in the final discussion paragraph on page 9, the current work assumes that the category labels are available during the training stage. In that case, how can it be claimed that the proposed method is unsupervised? Also, this may make comparison with unsupervised approaches such as UniAD unfair. 2) The second critical issue is the novelty of the proposed solution because it combines some well-known ideas such as vector-quantized VAE [21] and hierarchical VQ-VAE [17] and some recent ideas such as learning latent prototypes for autoencoders using optimal transport [A, B]. [A] Bie et al., "Learning Discrete Representation with Optimal Transport Quantized Autoencoders", 2023 [B] Oliveira et al., "Improving Variational Autoencoders Reconstruction Using Prototypes", 2023. However, in my opinion, the paper smartly combines these known ideas and makes some marginal improvements to implement a transformer-based VAE model. 3) The switching mechanism and mixture-of-experts concepts on page 4 have not been clearly explained. In particular, it is not clear what the so-called "classifier for producing logits" and "expert" mean and how these components tie in with the POT module described subsequently. Furthermore, the ablation study talks about "codebook switching" and "expert switching", but these are never described in the paper. 
4) Though the paper claims that the proposed method achieves SOTA results, it is not obvious if this is true. Except for UniAD (published in NeurIPS 2022), all the other baseline methods selected for benchmarking are from 2020 or 2021. A cursory glance at more recent works such as [C, D] indicates that the reported performance results fall short of SOTA results. [C] Liu et al., "SimpleNet: A Simple Network for Image Anomaly Detection and Localization", CVPR 2023 [D] Tien et al., "Revisiting Reverse Distillation for Anomaly Detection", CVPR 2023 5) The calibration of the anomaly score based on POT has been emphasized many times, but there appears to be no experiment that demonstrates the importance of this idea. 6) It is not clear how to determine the number of prototypes required. Will this depend on the number of classes (or number of anomaly types)? Why does the performance start decreasing when the number of prototypes increases beyond 512? 7) There is no mention of the computational complexity of the proposed approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not all the limitations of the proposed method have been discussed and addressed. The paper does not appear to have any potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: A1: Thanks! 1) Unsupervised AD methods only utilize normal images during the training stage, while both abnormal and normal images are used at the testing stage. Thus, 'unsupervised' in AD commonly refers to having no access to any abnormal data as supervision at the training stage. 2) The category labels are accessible in advance, and UniAD also states that ``The category labels may help the model better fit multi-class data. How to incorporate the unified model with category labels should be further studied''. Thus, in this work, we make full use of category information to better serve our model. Furthermore, we supplement an experiment without category information (no switching mechanism), resulting in 97.4 and 97.2 for anomaly detection and localization, which surpasses UniAD. 3) Under the one-for-one case, the category information is accessible to all the comparison models, including UniAD. In this case, our model surpasses UniAD (the SOTA method for the unified case) by 0.3\% and 0.5\% for anomaly detection and localization. A2: Our model is problem-oriented and elaborately designed to alleviate the pivotal 'identical shortcut' issue in anomaly detection. 1) In contrast to most previous methods that model a continuous latent space and suffer from undesired generalization to anomalies, we aim to model a discrete space that intrinsically prevents anomaly information from leaking into the reconstruction. However, the existing memory-based methods recombine and aggregate the discrete memory items, falling into an unknown continuous latent space which might be distorted. In contrast, we force the anomaly features to be replaced by a single discrete prototype. Therefore, VQ is a proper pathway to optimize prototypes for tightening the information bottleneck. Although VQ-VAE is a well-known technique, it rightly fits our motivation. 
As far as we know, this is the first attempt to impose strict restrictions on the discrete latent space, and the first verification of the validity of VQ for anomaly detection. 2) Moreover, the optimal transport learning is developed not only to facilitate prototype learning, as in previous works, but also to measure the feature-level anomaly score, robustly and accurately identifying anomalies. 3) The hierarchical structure is originally designed to merge fine-grained concrete information with abstraction-level semantics, which avoids the issue of codebook collapse, as shown in Table 5. 4) In addition, our model targets the challenging 'one-for-all' case, which suffers from the 'identical shortcut' issue more severely, as the model generalizability increases due to the complex data distribution of multiple classes. To tightly fit the unified case, we propose the switching mechanism to choose the corresponding codebook and expert for different data distributions, thus alleviating the 'identical shortcut' issue. A3: 1) The switching mechanism contains a multi-category classifier, $M$ codebooks, and $M$ reconstruction experts. The multi-category classifier takes the image feature as input and outputs the classification probability (logits) over the $M$ categories. To fit the data diversity in the one-for-all setting, we select an individual codebook (a group of prototypes) from the $M$ codebooks according to the classification probability. Furthermore, we select an individual reconstruction network (expert) that decodes features from the last decoder layer to rebuild the input features from the pre-trained EfficientNet. 2) As for POT, it is used to regularize the relationships between features and vector-quantized prototypes, making them better matched compared with the Euclidean or cosine distance used in the original VQ-VAE. Therefore, POT only takes effect on features at the hierarchical layers. 
3) 'Codebook switching' and 'expert switching' both refer to the classifier choosing the corresponding codebook or reconstruction network. A4: We supplement the experiments with comparisons to the mentioned methods in Table 1 of the uploaded PDF. The results show that we achieve superior performance. A5: As reported in Table 4 of the ablation study, the POT module leads to 0.4\% and 0.1\% performance gains on anomaly detection and localization, respectively, on the MVTec-AD dataset. Furthermore, as shown in Table 1 in the appendix, the POT module yields a 2.6\% improvement on the CIFAR-10 dataset. This demonstrates that the POT module is effective for detecting anomalies due to its well-established measurement alignment, especially showing stability in the complex scenarios of CIFAR-10. A6: The prototypes are grouped into $M$ categories. We only need to decide the number of prototypes $K$ per group, which is related to the complexity of the data distribution. For a texture or simple object category, such as bottle and grid, the iconic normal patterns are limited, and the corresponding number of prototypes can be set smaller than for complex objects. Without exhaustive parameter tuning for individual categories, we set the same number of prototypes for each category. We found that 512 is an empirically proper setting, which performs decently on average across all 15 categories. This might be because: i) a smaller number of prototypes cannot cover all the iconic normal patterns of complex objects, leading to poor reconstruction ability; ii) too many prototypes cause redundancy and repetition of iconic normal patterns, which harms model training and disturbs precise vector quantization. Specifically, redundant prototypes may never be used and optimized during training. At inference, those non-optimized prototypes may be closer to anomaly patterns, leading to leakage of abnormal information and, finally, poor anomaly detection. 
Moreover, it is worth noting that models with different numbers of prototypes consistently surpass all the competitors in Table 1 and Table 2. A7: We supplement the computational complexity analysis in Table 2 of the uploaded PDF. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: I thank the authors for their detailed response, which addresses some of my core concerns (regarding the switching mechanism and the unsupervised claim). Hence, I'm inclined to increase my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable suggestions, which help us improve the manuscript. We are glad that you increased your rating for this paper. Thanks again for your effort in reviewing this submission!
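The switching mechanism described in A3 can be sketched as a hard selection driven by classifier logits. The weights, shapes, and stand-in expert networks below are all hypothetical, chosen only to make the mechanics concrete:

```python
import numpy as np

def switch(image_feature, W_cls, codebooks, experts):
    """Hard-select the codebook and reconstruction expert for one image.

    image_feature: (C,) pooled image feature.
    W_cls: (C, M) weights of an illustrative linear multi-category classifier.
    codebooks: list of M prototype arrays; experts: list of M callables.
    """
    logits = image_feature @ W_cls     # (M,) classification scores
    m = int(np.argmax(logits))         # switch to the m-th category
    return codebooks[m], experts[m], m

# Toy setup: 2-dim features, M = 3 categories.
W_cls = np.array([[0.1, 0.9, 0.0],
                  [0.8, 0.2, 0.0]])
codebooks = [np.full((4, 2), float(s)) for s in range(3)]
experts = [lambda d, s=s: d * (s + 1.0) for s in range(3)]  # stand-in decoders

cb, expert, m = switch(np.array([1.0, 0.0]), W_cls, codebooks, experts)
```

Only the selected codebook and expert participate in reconstruction, which is why the encoder is shared while the decode path is category-specific.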
Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse Problems
Accept (poster)
Summary: Inspired by the NoLips literature on the optimization of convex objectives which are not globally L-smooth, this paper adds to the PnP literature the "Bregman score denoiser", which extends the BPG algorithm by a Bregman-based prox-map, along with convergence conditions even though NN-parametrized non-convex potentials are involved. Strengths: Originality: This is a plausible extension of recent related work on PnP networks based on state-of-the-art theory from the field of numerical optimization. Quality: It is apparent that the authors know both fields very well. Significance: The paper adds a new concept to the PnP literature. Weaknesses: Clarity: The presentation intersperses references, top-level arguments and technical details in a confusing manner. I had to read back and forth a few times in order to get an idea of what this paper is about. The authors criticize "unrealistic assumptions" (e.g. line 92) in related work but have to admit later on that their own assumptions are hard to check as well, even for a simple scenario (lines 275-277). The need for backtracking line search is not convenient either. Significance: Regarding the theory, I did not get whether some `generic' properties of the Bregman prox-maps exist that play the very same role as, say, the firm nonexpansiveness of Euclidean prox-maps, to achieve convergence in the considered generalized scenarios. The concrete scenario (Poisson noise) is classical. A comparison to a related approach shows no improvement (Fig. 1, (c), (d)). In particular, artificially corrupting images looks like old-style image denoising papers. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Before eq. (14), the convexity assumption regarding F and R, adopted in [Bauschke et al 2017] to which you refer, is missing. line 92: why is nonexpansiveness considered an "unrealistic requirement" to ensure convergence of PnP, in view of your own assumptions which are not easy to check either? 
How is the set C in (18) determined? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: This point has not been explicitly addressed in the paper, apparently. Please comment. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses:** - We first bring more details on our assumption of convexity of $\psi\_\gamma \circ \nabla h^\star(x)$, compared to the common assumption of nonexpansivity of the denoiser. It is observed in different works [1,2,3] that a state-of-the-art network trained to denoise without additional constraints is **not nonexpansive**, and that constraining a network during training to be nonexpansive can severely degrade its denoising performance (see for example [1, Table 1], [2, Figure 1], [3, Tables 5&6]). A semi-theoretical reason for this fact is that, by minimizing the $L^2$ cost, the denoiser is trained to approximate the true MMSE denoiser, which is *not nonexpansive*. This is why we call nonexpansivity an "unrealistic assumption". Conversely, as proved in the global rebuttal to all reviewers, our assumption (convexity of $\psi\_\gamma \circ \nabla h^\star(x)$) **is verified by the true MMSE denoiser**, and is thus more realistic. This explains why, without any additional constraints, our trained denoiser verifies this assumption. We would like to put forward the distinction between *verifying (or checking)* the validity of the assumption, after training, and *enforcing* the assumption, while training. Note that our convexity condition is not *"hard to check"* but *hard to enforce explicitly*. Indeed, the convexity condition can be easily verified empirically, after training, with (26)-(27). Please refer to the global rebuttal to all reviewers, where we discuss and plot, in the attached pdf, empirical verifications of this convexity. We observe that the condition (26)-(27) is clearly verified even far away from the image manifold. In comparison, coming back to the nonexpansivity assumption, since finding the Lipschitz constant of a neural network is NP-hard, *verifying* the nonexpansivity of a neural network is typically done with a similar empirical verification. 
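For context, the kind of empirical Lipschitz check alluded to above is usually a power-method estimate of a spectral norm; for a single linear (weight-matrix) layer it looks as follows. This is a generic sketch of the power method, not the code of any cited work:

```python
import numpy as np

def spectral_norm(W, iters=100, seed=0):
    """Estimate the largest singular value of W by power iteration on W^T W."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = W.T @ (W @ v)          # one step of power iteration
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)   # ||W v|| with v near the top singular vector

# A linear layer is nonexpansive iff its spectral norm is <= 1;
# this toy matrix has singular values (3, 1), so it is expansive.
W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
sigma = spectral_norm(W)
```

Because power iteration only converges to a local estimate for a full nonlinear network (it is run layer-wise or on local Jacobians), such checks certify nonexpansivity only approximately, which is the point made in the surrounding discussion.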
Moreover, previous methods *enforcing* nonexpansivity of a denoiser while training do not have better theoretical guarantees. Indeed, existing methods enforce nonexpansivity by either: - normalizing each layer by its spectral norm, with the spectral norm *approximated* via the power method [4]. As this is only a local approximation of the Lipschitz constant, there is no strict guarantee of nonexpansivity. - regularizing the training loss [5,3], thus without strict guarantee either. Let us also point out that the convexity condition is required for the convergence of the B-PnP algorithm but not for the convergence of B-RED. B-RED thus has exact convergence guarantees. - Concerning the backtracking line search strategy, it keeps the algorithm fast along with convergence guarantees, as it automatically sets the stepsize to its maximal value for convergence. Backtracking is commonly used when the Lipschitz (or NoLip) constant of the gradient of the smooth potential is unknown, which is typically the case when regularizing inverse problems with deep explicit priors; see for example [6]. - Regarding the "generic" properties of the Bregman prox-maps, Euclidean proximity operators of *nonconvex potentials* are not firmly nonexpansive (this is actually why our denoiser is not nonexpansive). The convergence of proximal algorithms like Proximal Gradient Descent in the nonconvex setting is only due to the *sufficient decrease property* obtained by combining the first-order optimality of the prox and the descent lemma on the potential with Lipschitz gradient. The same idea underlies the proof for the BPG algorithm with a Bregman prox, where the descent lemma holds thanks to the NoLip property (4), which replaces the Lipschitz-gradient assumption. - Concerning denoising, Figure 1 is not evaluating Poisson denoising but Inverse Gamma noise denoising. In this experiment, we do not expect the B-DRUNet denoiser to outperform the DRUNet denoiser. 
Indeed, both are based on the same architecture but the former is additionally constrained to take the specific form (25). We are thus satisfied with the fact that this specific B-DRUNet almost reaches the performance of DRUNet (a difference of ~0.05 dB). - Finally, regarding the artificial corruption of images, the purpose of training a denoiser using artificial Inverse Gamma noise (or artificial Gaussian noise for standard Plug-and-Play methods) is to establish a robust deep prior that effectively regularizes inverse problems. The selection of the (artificial) Bregman noise model holds significant importance in ensuring that our Bregman Score denoiser offers a meaningful prior. Once this prior is established through training, it becomes applicable in an unsupervised manner to any inverse problem, even for restoring images corrupted with real noise. **Questions:** - The NoLip condition will be added before eq. (14); thanks for noticing this omission. - Concerning nonexpansivity being considered an "unrealistic requirement", please refer to our comment above. - In practice, we choose $C = [0,1]^n$. **Limitations:** Thanks for pointing this out. In the global rebuttal to all reviewers, we provide a limitations paragraph which will be added to the conclusion. [1] Hertrich et al. "Convolutional proximal neural networks and PnP algorithms". In Linear Algebra and its App., 2021. [2] Bohra et al. "Learning lipschitz-controlled activation functions in neural networks for PnP im. rec. methods". NeurIPS Workshop on DL and Inv Prob, 2021. [3] Hurault et al. "Proximal denoiser for convergent PnP optimization with nonconvex regul.". ICML 2022. [4] Ryu et al. "PnP methods provably converge with properly trained denoisers". ICML 2019. [5] Pesquet et al. "Learning maximally monotone operators for im. recovery." SIAM Journal on Im Sciences, 2021 [6] Romano et al. "The little engine that could: Regularization by denoising (red)". SIAM Journal on Im Sciences, 2017. 
--- Rebuttal Comment 1.1: Comment: Thanks for your response. I am more concerned about theoretical aspects than about experimental performance. In this respect, your arguments are comprehensible. I stick to my positive score.
Summary: This paper studies an extension of the Plug-and-Play (PnP) framework for solving inverse imaging problems by considering descent schemes in metrics different from L2: Motivated by the fact that some data fidelity terms such as the Kullback-Leibler divergence allow for an efficient minimization with the Bregman Proximal Gradient (BPG) method (an extension of proximal gradient descent to arbitrary Bregman distances instead of squared L2), the authors develop a parametrization of a learnable denoiser which can be interpreted as a proximal operator (or descent step) of a cost function w.r.t. the underlying Bregman distance that combines well with a particular data fidelity term. Under some additional assumptions, this allows proving the convergence of the resulting Bregman PnP framework. Numerical experiments illustrate that the resulting scheme can successfully solve deblurring problems with Poisson noise. Strengths: The paper is technically sound and presents the technical construction of the Bregman PnP approach very well. It is an elegant solution that closes a gap missing in the PnP (and RED) framework with learnable priors. I found it very convincing in terms of its theory and it even derives some general (smaller) missing pieces for convergence beyond the seminal works by Bolte, Bauschke, Teboulle and co-authors. Despite this strengths section being short in comparison to the weaknesses, I think that the strong theoretical contribution along with an illustration of the implementation outweighs the weaknesses in terms of (benchmark) results, such that I am leaning towards accepting the paper. Weaknesses: The numerical experiments/results are not very convincing from a practical point of view. - First, the denoising performance of the dedicated network is not better than that of a plain network (might not be very important). - Second, the main paper does not compare to the plain PnP approaches with L2 fidelity. 
The supplementary material reports tiny differences only (with the proposed approach being slightly worse for large noise and slightly better for small noise). - According to line 275, the desired condition of $\phi_\gamma \circ \nabla h^*$ being convex is not enforced but seems to hold empirically when training the network. Thus, any convergence guarantee is lost. - Figure 5 in the supplementary material shows a concerning risk of ending up with a bad result. The mitigation strategy of first running 100 iterations with a first set of parameters and then switching to a different set of parameters represents a significant amount of fine-tuning (possibly exceeding the number of hyperparameters and amount of fine-tuning used for the standard L2 PnP approach), such that even the small improvements in Table 3 of the supplementary material need to be viewed with care. Minor aspects: - In line 225 the authors decide to do backtracking line search to avoid estimating the NoLip constant, but would backtracking on the (differentiable but not L-smooth) data fidelity term considered here not work in the L2 case? - The authors mention that there is no result on the convergence of the PDHG method for nonconvex regularizers (line 81). As I was curious, I briefly searched online and found "Precompact convergence of the nonconvex Primal–Dual Hybrid Gradient algorithm" by Sun et al., Journal of Computational and Applied Mathematics, 2018. Precompactness of the primal variable (the image) would be easy to ensure if one restricts every value to [0,1]. Is their result applicable? (Honestly, I have not read the paper yet.) - The condition $\lambda L_f < 1$ in Theorem 2 seems to limit the amount of data fidelity one can use in order to still have a convergent algorithm - is this a limitation? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Considering a difficult (2-stage) optimization with different parameters to avoid bad minimizers as shown in Fig. 
5 of the supplement, a lack of strict convergence guarantee as the convexity condition cannot be enforced, and a negligible difference to a plain PnP or RED approach in terms of PSNR, what is the advantage of the proposed method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I don't think there is a potential negative societal impact of this work. In terms of general limitations, I think the authors should be more open about the fact that the use of the Bregman distance framework did not result in improved results over prior algorithmic schemes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses:** - We do not expect the B-DRUNet denoiser to outperform the DRUNet denoiser. Indeed, both are based on the same architecture but the former is additionally constrained to take the specific form (25). We are thus satisfied with the fact that B-DRUNet almost reaches the performance of DRUNet (a difference of ~0.05 dB). - We recognize that, in terms of numerical performance, our algorithms are on par with existing methods. Yet, our method stands alone in offering guarantees of convergence. We believe that the combination of comparable performance to these methods along with our convergence guarantees holds great promise. In fact, we have introduced the first convergent and effective technique for Poisson image restoration using a nonconvex and deep prior. - Indeed, the convexity condition (26)-(27) is not hard-coded in the network architecture. However, it is naturally verified in practice by the trained denoiser. Please refer to the global rebuttal to all reviewers, where we discuss and plot, in the attached pdf, empirical verifications of the convexity hypothesis. We note a clear confirmation of the condition (26)-(27), even when evaluated far from the image manifold. We also prove in the rebuttal to all reviewers that the assumption is **verified by the true MMSE denoiser**. By minimizing the $L^2$ cost, the denoiser is actually trained to approximate this MMSE denoiser. This explains why our denoiser naturally satisfies the convexity condition without necessitating supplementary constraints. In comparison, as explained in the introduction, most studies in the literature on the convergence of PnP methods assume *nonexpansive denoisers*. However, the **true MMSE denoiser is not nonexpansive**. Consequently, it is observed in different works [1,2,3] that a network trained to denoise without additional constraints is *not nonexpansive*. 
Furthermore, ensuring nonexpansivity of a denoiser while training is often achieved through the use of soft penalties or approximations (which also lack explicit guarantees) and significantly degrades denoising performance. - On the numerical side, we agree that the adopted two-step process is a limitation of our approach. The learned regularizer being nonconvex, the algorithms can be sensitive to initialization and for strong degradations the algorithm might not converge towards the right critical point if not initialized properly. - For Poisson noise, we can show that the data-fidelity term does not verify any NoLip condition (4) w.r.t. the L2 metric. However, with Proposition 4 (Appendix D), the backtracking strategy is provably guaranteed to find a stepsize such that the objective function decreases, **provided** that a NoLip condition is satisfied for some $L>0$. Indeed, Proposition 4 is based on the sufficient decrease property (81) derived from the NoLip condition. Thus, in the L2 case, adding backtracking does not lead to a converging scheme. - We are indeed aware of the result from Sun et al. Even though it is very interesting, the precompactness is not easy to enforce for the PDHG algorithm. Indeed, hard-constraining the values of one variable in [0,1] amounts to adding a proximal step (a projection) in the algorithm. In our B-RED algorithm, when adding the hard constraint $i_C$ (18) to the Bregman Gradient Descent algorithm (17), we still fit the general BPG algorithm (14). However, adding a proximal step in the PDHG algorithm does not correspond to any known algorithm. Note that one could prove that it is possible to ensure precompactness and thus convergence of nonconvex PDHG if we add the assumption that $f$ has Lipschitz gradient. Such a convergence result would indeed make sense when compared to the recent convergence results of ADMM/DRS in the nonconvex setting that require one function to have Lipschitz gradient [4]. 
However, here for Poisson deblurring, $\nabla f$ is not Lipschitz and this is not applicable. - The constraint on $\lambda$ is indeed a limitation. Thanks for pointing this out. It is due to the fact that the denoiser is written as a proximal step *with stepsize $1$*. We are thus forced to keep a fixed stepsize $\tau=1$. The BPG condition for convergence $\tau \lambda L_f < 1$ then becomes $\lambda L_f < 1$, i.e. a constraint on $\lambda$. As future work, we plan to explore solutions for relaxing this constraint, for instance by using a different algorithm than BPG with a looser stepsize constraint. **Questions:** Compared to other Poisson image restoration methods, the main advantage of our algorithms is their convergence. First and foremost, the convexity condition is required for the convergence of the B-PnP algorithm but not for the convergence of B-RED. B-RED thus has strict convergence guarantees. In addition, for the convergence of B-PnP, we refer to our comment above on the practical and semi-theoretical verification of the convexity condition. Furthermore, although our experiments focus on Poisson inverse problems, the core contribution of our work lies in its theoretical advancements. We present the first Bregman extension within the plug-and-play and Regularization-by-Denoising frameworks, supported by a strong theoretical foundation. This encompasses novel convergence findings for the Bregman Proximal Gradient algorithm (see Appendix D.1) and a new characterization of Bregman proximity operators (see Appendix C). These comprehensive theoretical outcomes extend beyond our immediate plug-and-play objectives, holding potential for diverse applications. [1] Hertrich et al. "Convolutional proximal neural networks and PnP algorithms". In Linear Algebra and its App., 2021. [2] Pesquet et al. "Learning maximally monotone operators for image recovery". SIAM J. on Im. Sc., 2022. [3] Hurault et al. 
"Proximal denoiser for convergent PnP optimization with nonconvex regularization". ICML 2022. [4] Themelis & Patrinos. "DRS and ADMM for nonconvex optimization: Tight convergence results." SIAM J. on Optim, 2020. --- Rebuttal Comment 1.1: Title: Please make limitations clear in the revised version - otherwise: thanks, nice work! Comment: Dear authors, thanks a lot for the detailed answers and the new section on the limitations! I think the restriction on $\lambda$ could be mentioned explicitly as well. Just to make sure: The constraint on $\lambda$ was respected in the experiments, right? Otherwise, this crucially needs to be pointed out. --- Reply to Comment 1.1.1: Comment: Thanks for the advice. We will update the limitations paragraph to mention the restriction on $\lambda$ for the B-PnP algorithm. The constraint $\lambda L_f<1$, which is specific to the B-PnP algorithm (B-RED converges without such a constraint), may not be respected in our experiments. The best estimation of a global NoLip constant $L_f$ for the Poisson data-fidelity term we could get is $L_f=||y||_1$ (see Appendix E.3). However, for an image, the value $||y||_1$ is large and the restriction $\lambda < \frac{1}{||y||_1}$ leads to extremely small $\lambda$ values. This approximation of a *global* NoLip constant $L_f$ can be *locally* very loose. In particular, the upper bound (127) used for estimating this constant is, for most images, largely over-estimated. In order to still guarantee convergence, as mentioned in Appendix E.3, we adopt the following backtracking-like strategy to adjust the regularization parameter $\lambda$: - Choose an initial value for $\lambda > 0$ - At each iteration $k$ of the B-PnP algorithm, check the sufficient decrease of the objective function $F_{\lambda,\gamma} = \lambda f + \phi_\gamma$, i.e. 
$F_{\lambda,\gamma}(x_{k}) - F_{\lambda,\gamma}(x_{k+1}) \geq \delta D_h(x_{k+1},x_k)$. If at some iteration this condition is not satisfied before convergence, we alert the user and restart the algorithm with $\lambda \longleftarrow \eta \lambda$. We also let the user know that, for optimal performance, it might be necessary to adjust the regularization parameter $\gamma$ of the denoiser, in order to compensate for this decrease of $\lambda$. With the proposed default value of $\lambda=0.025$, over the variety of blur kernels and noise levels experimented with, the sufficient decrease property was always verified and this backtracking algorithm was never activated. This illustrates that $||y||_1$ is a poor approximation of the NoLip constant. In order to clarify this point we propose to include the previous discussion in a new appendix and add the following paragraph in the main paper: In our experiments, the constraint $\lambda L_f<1$ of the B-PnP algorithm may not be respected. The *global* NoLip constant $L_f$ can indeed be *locally* very loose. As explained in the Appendix, we can adopt a backtracking-like strategy on the regularization parameter $\lambda$ to ensure convergence. Nevertheless, with the proposed default value $\lambda=0.025$, this backtracking algorithm was never activated over the variety of blur kernels and noise levels experimented with.
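The restart strategy described in this reply can be sketched in a few lines. Below is an illustrative NumPy mock-up, not the authors' code: `step`, `objective`, `eta`, and `delta` are placeholder names, and $D_h$ is taken as the Bregman divergence of Burg's entropy.

```python
import numpy as np

def burg_bregman_dist(x_new, x_old):
    """D_h(x_new, x_old) for Burg's entropy h(x) = -sum(log x_i)."""
    r = x_new / x_old
    return np.sum(r - np.log(r) - 1.0)

def run_with_lambda_backtracking(step, objective, x0, lam,
                                 eta=0.5, delta=0.01, n_iter=100, max_restarts=5):
    """Run an iterative scheme; whenever the sufficient-decrease check
    F(x_k) - F(x_{k+1}) >= delta * D_h(x_{k+1}, x_k) fails,
    restart from x0 with a smaller regularization weight lam <- eta * lam."""
    x = x0.copy()
    for _ in range(max_restarts):
        x, ok = x0.copy(), True
        for _ in range(n_iter):
            x_new = step(x, lam)
            if objective(x, lam) - objective(x_new, lam) < delta * burg_bregman_dist(x_new, x):
                ok = False  # sufficient decrease violated: shrink lambda and restart
                break
            x = x_new
        if ok:
            return x, lam
        lam *= eta
    return x, lam
```

On a well-conditioned problem (as in the authors' experiments with the default $\lambda$), the check always passes and the restart branch is never taken.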
Summary: This paper develops a Bregman Plug and Play image restoration algorithm for solving under-determined inverse problems in the presence of Poisson measurement noise. The framework trains the image denoising algorithms used within PnP iterations (the "Bregman Score Denoiser") to remove noise with an exponential distribution that depends on the distribution of the measurement noise (as opposed to the Gaussian noise used for training previous PnP denoisers). The paper integrates the proposed denoiser into iterative algorithms to form B-PnP and B-RED. Both algorithms effectively deblur images in the presence of Poisson noise and are provably convergent to fixed points. Strengths: -Generally well-written -Well motivated -I believe technically sound -Evaluated on several different blur kernels, including those based on camera shake. (Blur kernels were known) Weaknesses: -Based on Table 3 in the appendix, the proposed method does not meaningfully improve performance over existing methods (though it does come with convergence guarantees) -Comparisons with existing methods are not particularly comprehensive -In practice, PnP algorithms are quite sensitive to how hyperparameters are chosen and setting them correctly can be a challenge. This algorithm introduces another parameter, gamma. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Typos Line 90: "see also a review Kamilov et al." --> "see also a review by Kamilov et al." Line 197: Putting (i) on a separate line would read more cleanly Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - We recognize that in terms of numerical performance, our algorithms are comparable to existing methods. Yet, as you correctly pointed out, our method stands alone in offering guarantees of convergence. We believe that the combination of comparable performance to these methods along with our convergence guarantees is a relevant novelty. In fact, we have introduced the first convergent and effective technique for Poisson image restoration using a nonconvex and deep prior. Furthermore, although our experiments focus on Poisson inverse problems, the core contribution of our work lies in its theoretical advancements. Our theoretical study contains novel convergence findings for the Bregman Proximal Gradient algorithm (see Appendix D.1) and a new characterization of Bregman proximity operators (see Appendix C). These comprehensive theoretical outcomes extend beyond our immediate plug-and-play objectives, holding potential for diverse applications. - Thanks for your feedback on the comparison section. In the updated version of the publication, we propose to bring more details when describing the compared methods, as follows: a) PnP-PGD corresponds to the standard plug-and-play proximal gradient descent algorithm in the Euclidean geometry $$ x_{k+1} = D_\sigma \circ (Id - \tau \nabla f)(x_k) $$ where $f$ is the Poisson data-fidelity term (2). The plugged denoiser $D_\sigma$ is the DRUNet network (i.e. the same architecture as the one used for parametrizing our denoiser) but now trained to denoise **Gaussian** noise of std $\sigma$. We train $D_\sigma$ simultaneously for all noise levels $\sigma \in [0,50]$ with the $L^2$ loss. As $f$ does not have Lipschitz gradient, this PnP-PGD algorithm does not have convergence guarantees. 
b) PnP-BPG performs the same iterations as our B-PnP algorithm (20) $$x_{k+1} = D_\sigma \circ \nabla h^* \circ (\nabla h - \tau \nabla f)(x_k)$$ but our Bregman Score Denoiser in (20) is replaced by the more classical Gaussian denoiser $D_\sigma$ introduced above. This scheme does not have guarantees of convergence as $D_\sigma$ is no longer a Bregman proximity operator. For both PnP-PGD and PnP-BPG the parameters $\sigma$ and $\tau$ are optimized for each noise level by grid search. c) ALM Unfolded [3] uses the Augmented Lagrangian Method for decoupling the linear operator $A$ from the data-fidelity term. They derive a 3-step algorithm that is unfolded and trained for specific degradations. In particular, it is trained for image deblurring with a variety of blurs and noise levels. The publicly available model being trained on grayscale images, for restoring our color images, we treat each color channel independently. - Concerning the hyperparameters involved in our algorithms, the parameter $\gamma$ is the noise level of the denoiser: $D_\gamma$ is trained to denoise images degraded with noise parameter $\gamma$. It is the equivalent of the parameter $\sigma$ commonly used by PnP methods with Gaussian denoisers. Thus, compared to standard PnP algorithms, there is no additional hyperparameter involved. Actually, we even have one hyperparameter fewer as the stepsize of the algorithm does not have to be chosen: for B-RED it is automatically tuned with a backtracking line-search strategy, and for B-PnP it has to be set to $1$. We agree on the fact that hyperparameter selection is a common difficulty in variational image restoration. Given an input image, automatic parameter tuning strategies have been developed and could be added to our framework. For example, [1] proposes a Bayesian method for setting the regularization parameter and [2] employs deep reinforcement learning to train a policy network yielding well-suited parameters for PnP. 
- In the global rebuttal to all reviewers, we propose a limitation paragraph which will be added at the end of the main paper. [1] Vidal et al. "Maximum likelihood estimation of regularization parameters in high-dimensional inverse problems: An empirical bayesian approach part i: Methodology and experiments.", 2020. [2] Wei et al. "Tuning-free plug-and-play proximal algorithm for inverse imaging problems". In ICML 2020. [3] Sanghvi et al. "Photon limited non-blind deblurring using algorithm unrolling". IEEE Transactions on Computational Imaging, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The proposed comparisons with existing methods will improve the paper.
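For readers comparing updates a) and b) above: the two rules differ only in the geometry of the gradient step. Below is a minimal NumPy sketch under the simplifying assumption $A = Id$ (so $\nabla f(x) = 1 - y/x$ for the Poisson data-fidelity $f(x) = \sum_i (x_i - y_i \log x_i)$), with a placeholder `denoiser` callable; this is an illustration, not the paper's implementation.

```python
import numpy as np

def grad_poisson(x, y):
    """Gradient of the Poisson data-fidelity f(x) = sum(x - y*log x), with A = Id."""
    return 1.0 - y / x

def pnp_pgd_step(x, y, denoiser, tau):
    """Euclidean PnP step: x+ = D(x - tau * grad f(x))."""
    return denoiser(x - tau * grad_poisson(x, y))

def pnp_bpg_step(x, y, denoiser, tau):
    """Bregman step under Burg's entropy h(x) = -sum(log x_i):
    grad h(x) = -1/x and grad h*(z) = -1/z, so
    x+ = D(grad h*(grad h(x) - tau * grad f(x)))."""
    z = -1.0 / x - tau * grad_poisson(x, y)
    return denoiser(-1.0 / z)  # stays positive as long as z < 0
```

With an identity denoiser, both steps share the fixed point $x = y$; the Bregman step additionally keeps the iterates positive without any clipping, provided the relative-smoothness stepsize condition holds.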
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their careful reading of our submission, and their helpful comments and suggestions. In each individual rebuttal, we tried to answer all the objections raised by the reviewers. We give here more general remarks about aspects that multiple reviewers have highlighted. **Regarding the hypothesis of convexity of $\psi\_\gamma \circ \nabla h^\star(x)$ in Proposition 1** - We first provide a detailed experimental validation of this assumption (find the plots in the attached pdf): After training the Bregman Score Denoiser $\mathcal{B}\_{\gamma}(y) = y - y^2 \nabla g\_\gamma(y)$ to denoise Inverse Gamma noise at different noise levels $\gamma$, we verify the convexity of $\eta_\gamma : z \to \psi_\gamma \circ \nabla h^\star(z) = \psi_\gamma\left(\frac{-1}{z}\right)$ on $int\ dom(h^\star) = \mathbb{R}^n\_{--}$ where $\psi_\gamma$ is defined in (11). In Appendix E.2, we showed that for $z \in \mathbb{R}^n\_{--}$ and $y = - 1/z$, $\forall d \in \mathbb{R}^n$, $\bigl \langle \nabla^2 \eta_{\gamma}(z) d,d \bigr \rangle = \sum_{i=1}^n \left( y^2(1 - 2y \nabla g\_\gamma(y)) \right)_i d_i ^2 - \bigl \langle Diag(y^4)\nabla^2 g\_\gamma(y)d, d \bigr \rangle = \mathcal{C}\_\gamma(y, d)$ To confirm the convexity assumption around the image manifold, we need to verify that the above quantity is positive for any image $y$. 
We represent, **in the attached pdf**, $c(\gamma, \xi) = \min_{y_\xi, d} \biggl \langle \nabla^2 \eta_{\gamma}\left(\frac{-1}{y_\xi}\right) d,d \biggr \rangle = \min_{y_\xi, d} \mathcal{C}\_\gamma(y_\xi, d)$ The minimum is taken over $\\{y_{\xi}\\}$ and $\\{d\\}$ where - Given $\\{ x_i \\}$ the CBSD68 testset of clean and natural images, $y_\xi$ is obtained from $x \in \\{x_i\\}$ by interpolating between $\hat y_\xi \sim p_\xi(y|x)$, a noisy version of $x$, and $\mathcal{B}\_{\gamma}(\hat y_\xi)$ via $y\_\xi = \alpha \hat y\_\xi + (1-\alpha) \mathcal{B}\_{\gamma}(\hat y\_\xi)$, where $\alpha \sim \mathcal{U}[0,1]$. We use different noise models $p_\xi$ and noise levels $\xi$. This enables us to explore the space around the image manifold. - For each test image, $100$ random vectors $d$ are sampled from $\mathcal{U}[0,1]^n$. In the figures, we represent $c(\gamma, \xi)$ w.r.t.: - $x$-axis: the $\gamma$ parameter of the denoiser. - $y$-axis: the noise level $\xi$ in the input image. In Figure (a), the noise model is Inverse Gamma noise $p_\xi(y|x) = \mathcal{IG}(\xi-1, \xi x)$, i.e. the noise model used for training the denoiser. In Figure (b), the noise model is Poisson $p_\xi(y|x) = \mathcal{P}(\xi x)$, i.e. the noise model on which our plug-and-play algorithms are evaluated, and in Figure (c), the noise model is Gaussian $p_\xi(y|x) = \mathcal{N}(x,\xi^2 Id)$. We observe that for all $\gamma$ seen during training, and even far away from the image manifold (i.e. for large noise levels), we have $c(\gamma, \xi) > 0$ by a large margin, i.e. the convexity condition of Proposition 1 is clearly verified. These plots will be added to the appendix of the paper. - We also provide a theoretical argument supporting the assumption: Our denoiser is trained by minimizing the $L^2$ cost for which the optimal Bayes estimator is the MMSE. The MMSE is then a theoretical denoiser that our denoiser tries to approximate. 
**We can show that the proposed assumption ($\psi\_\gamma \circ \nabla h^\star(x)$ convex) is verified by the MMSE denoiser.** This explains why our denoiser, after training, naturally satisfies this condition without necessitating supplementary constraints. This result is proven in [1, Lemma A.1] in the Euclidean case; we here extend the proof to the Burg's entropy Bregman potential. We summarize the proof here without including the details of the calculations. We use the same notations as above. With $h(x) = -\sum_{i=1}^n \log(x_i)$ Burg's entropy, it consists in showing that, for $g_\gamma(y) = - \frac{1}{\gamma} \log p_Y(y)$ (i.e. when the denoiser is the MMSE) we have $\forall z \in \mathbb{R}^n_{--}, \langle \nabla^2 \eta_\gamma(z) d,d \rangle \geq 0$. We showed in Appendix E.2 that $\forall z \in \mathbb{R}^n_{--}$ and $y = \nabla h^*(z) = -\frac{1}{z}$, $\nabla^2 \eta\_\gamma(z) = Diag \left(y^2(1 - 2y \nabla g\_\gamma(y)) \right) - Diag(y^4) \nabla^2 g\_\gamma(y)$ We consider the single variable case $n=1$. After differentiating $g_\gamma$ twice, we get $\eta_\gamma''(z) = \frac{y^2}{\gamma p_Y^2(y)} \left( \gamma p_Y^2(y) + 2y p'_Y(y)p_Y(y) - y^2[p'_Y(y)]^2 + y^2 p''_Y(y)p_Y(y) \right)$ Recall that for Burg's entropy, the Bregman noise model writes $p(y|x) = \alpha(x) \prod_{i=1}^n (y\_i)^{-\gamma} \exp\left(- \gamma \frac{x\_i}{y\_i}\right)$. Differentiating $p_Y(y)= \int\_x p(y|x)p\_X(x)dx$ twice, after simplification we get $\eta_\gamma''(z) = \frac{\gamma}{p_Y^2(y)} \int_x \int_{x'} \frac{(x-x')^2}{2} p(y|x)p(y|x')p_X(x)p_X(x')dxdx' \geq 0$ The extension to $n>1$ is straightforward, where $(x-x')^2$ becomes $\langle x-x', d \rangle^2$. The detailed proof will be added to the appendix of the paper. **Regarding the limitations, this section will be added in the main paper**: The central significance of our work stems from its theoretical study but we recognize certain limits within our experimental results. 
First, when applied to deblurring with Poisson noise, our proposed algorithms do not outperform existing methods in terms of PSNR. Second, while we prove that B-RED is convergent without restriction, the convergence of B-PnP depends on a specific convexity condition. Despite being confirmed with experiments and having robust theoretical foundations, this assumption might fail to hold for non-natural images that significantly differ from those in the training dataset. Finally, due to the nonconvex nature of our proposed prior, the practical performance of the algorithms can be sensitive to their initialization. [1] Gribonval, "Should penalized ...." IEEE Trans on Signal Proc., 2011. Pdf: /pdf/8d3bdd371d6fe8ea48149fa6aadc90e90a8c0525.pdf
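The empirical check described in this global rebuttal boils down to evaluating the quadratic form $\mathcal{C}_\gamma(y, d)$ and verifying that it stays positive. Below is a toy NumPy sketch of that evaluation; the trained network's $g_\gamma$ is not reproduced here, so a hand-picked quadratic potential (`grad_g`, `hess_g`) stands in for it purely as an illustration.

```python
import numpy as np

def convexity_quadform(y, d, grad_g, hess_g):
    """C_gamma(y, d) = sum_i (y^2 (1 - 2 y grad_g(y)))_i d_i^2
                       - <Diag(y^4) hess_g(y) d, d>."""
    term1 = np.sum(y**2 * (1.0 - 2.0 * y * grad_g(y)) * d**2)
    term2 = d @ (np.diag(y**4) @ hess_g(y) @ d)
    return term1 - term2

# Toy stand-in potential g(y) = ||y - ybar||^2 / (2*gamma), with known derivatives.
gamma, ybar = 10.0, 0.5
grad_g = lambda y: (y - ybar) / gamma
hess_g = lambda y: np.eye(y.size) / gamma

# Sample points around a crude "image manifold" (entries in (0.1, 1]) and
# random directions d, then record the minimum of the quadratic form.
rng = np.random.default_rng(0)
vals = [convexity_quadform(rng.uniform(0.1, 1.0, 8),
                           rng.uniform(0.0, 1.0, 8), grad_g, hess_g)
        for _ in range(100)]
```

For this toy potential the form is provably positive (the first term dominates whenever $\gamma$ is large relative to $\|y\|_\infty$), mirroring the positive margin the authors report for their trained denoiser.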
NeurIPS_2023_submissions_huggingface
2023
A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference
Accept (poster)
Summary: This paper presents a variational framework for approximating neuro-symbolic inference. It addresses the weighted model counting (WMC) and most probable explanation (MPE) problems by introducing prediction and explanation models. Training techniques, such as output space factorization, a regularized Dirichlet prior, and a joint matching loss, are discussed. To improve training efficiency and ensure logical constraint satisfaction during testing, a symbolic pruner is also proposed. Experimental evaluations demonstrate scalability and performance across three neuro-symbolic tasks. Strengths: - The paper is well-written and easy to understand, with a clear statement of the problem and methodology. - The framework is simple and scalable, which may inspire further improvements in future work. - Some training techniques are interesting, particularly the joint matching loss, which is related to GFlowNet. Weaknesses: - The authors have carefully discussed some weaknesses in Section 6. - The motivation behind the prediction model and explanation model needs to be clarified. - It is unclear why the explanation model is necessary. From my understanding, only the prediction model is needed for neuro-symbolic training: we just need to train the prediction model using Eqn. (7), and then train the neural network using Eqn. (4); or repeat these two steps iteratively. - Another issue is the necessity of the prediction model. First, the prediction model is introduced to approximate weighted model counting, but approximate counting or sampling of $\mathbf{w}$ is still required during its training. Second, in Line 192, the authors report that the prediction model cannot perform better than using symbolic prediction. - Some techniques are so specific that they appear to be designed only for the MNISTAdd task. For instance, as mentioned in Appendix G, the symbolic pruner is highly efficient for the MNISTAdd task due to its simple mathematical structure. 
However, for more complex tasks, a SAT solver must be used, and the number of solver calls is proportional to the problem dimension, which is not acceptable. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Should $\phi \leftarrow \textbf{Algorithm 2}(\textsf{beliefs}, \phi)$ be changed to $\textbf{Algorithm 1}$? - Does the framework have requirements for background knowledge? For example, should it be presented in the form of CNF? - Is it difficult to design and train the prediction model, given that it involves the approximation of the symbolic reasoning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the limitations of the approach have been presented. The work does not have negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and appreciate their support of the paper's writing and the framework's simplicity. The reviewer would like a clarification on the motivation behind the prediction and explanation model, which we provide below. > It is unclear why the explanation model is necessary. From my understanding, only the prediction model is needed for neuro-symbolic training: we just need to train the prediction model using Eqn. (7), and then train the neural network using Eqn. (4); or repeat these two steps iteratively. The reviewer is correct: The explanation model is indeed optional. The explanation model approximates the posterior, which can 'explain' a decision with the $\mathbf{w}$ that is most likely for that decision. As mentioned, interleaving Eq 7 and 4 (the prediction-only variant) is enough to train the perception model in A-NeSI: This is 'A-NeSI (predict)' in Table 1, and Tables 2 and 3 also use this variant. We will update the paper to emphasise this point. > Another issue is the necessity of the prediction model. The prediction model is necessary for A-NeSI and is why our method scales while exact inference does not. We will answer both arguments separately: > First, the prediction model is introduced to approximate weighted model counting, but approximate counting or sampling of $\\mathbf{w}$ is still required during its training. We do not perform approximate counting during the training of the prediction model. Instead, we indeed sample $\mathbf{w}$ to train it (see Equation 7). However, this sampling step is fast: To train the prediction model, we do not sample from the posterior $p(\mathbf{w}|\mathbf{y}, \mathbf{P})$ (expensive, requires counting in the normaliser) but from $p(\mathbf{w}|\mathbf{P})$ (fast, does not require normalising. It is just a single `torch.multinomial` call). Therefore, we bypass expensive counting or sampling while training the prediction model. 
Furthermore, evaluating $c(\mathbf{w})$ is also usually fast. We will highlight this efficiency in the paper. > Second, in Line 192, the authors report that the prediction model cannot perform better than using symbolic prediction. The reviewer is correct that during _testing_, the prediction model can not perform better than symbolic prediction in our tasks. However, the prediction model is still necessary: We need it to (efficiently/scalably) _train_ the perception model (Equation 4). However, we can throw the model away after training and use symbolic prediction for maximum accuracy. We will clarify this in the text. > Some techniques are so specific that they appear to be designed for the MNISTAdd task. For instance, the symbolic pruner is highly efficient for MNISTAdd task due to its simple mathematical structure. However, for more complex tasks, a SAT solver must be used, and the number of solver calls is proportional to the problem dimension, which is not acceptable. The symbolic pruner is very flexible in its design: While in the general case, we can always find a _perfect_ symbolic pruner with SAT-solving (which is, as you mention, quite expensive), we can often design fast imperfect pruners that still help A-NeSI. For instance, we used a fast, imperfect pruner in the Path Planning task to ensure the prediction model always returns a path (see Appendix I.3.1). We called those 'local constraints' in Appendix G.3. In hindsight, 'imperfect constraints' may be a more apt name. Furthermore, as discussed in Appendix G.4, one could _learn_ an (imperfect) symbolic pruner. We will update Appendix G to clarify the possibilities of the design of the symbolic pruner. > Does the framework have requirements for background knowledge? For example, should it be presented in the form of CNF? Section 2.1 (problem components) defined the background knowledge as a deterministic black-box function $c$ between discrete input and output variables. 
Therefore, the two requirements are that $c$ is deterministic and that the domain and co-domain are discrete (although extensions to continuous variables are possible). The background knowledge does not have to be in a specific form (like CNF), but knowing about the form may help us design output factorisations and symbolic pruners. We will clarify this point in the background section. > Is it difficult to design and train the prediction model, given that it involves the approximation of the symbolic reasoning? This paper used simple MLPs for the first two experiments and a standard ResNet for the third. The training of the prediction models went smoothly for all experiments but the most challenging one (the 30x30 path planning). We trained on this problem for 44 hours and observed that the loss decreased slowly and had not yet converged. Does this generalise to more complex settings? There has been active research into whether neural networks can learn reasoning tasks, and the main consensus is positive as long as we evaluate the network in-distribution. We are not interested in out-of-distribution generalisation in A-NeSI, especially if the prior gives a good match. Furthermore, A-NeSI gives ample opportunity to design the prediction model with any neural network. In particular, GNNs will often be a good choice. > Should Algorithm 2 be changed to Algorithm 1? This is indeed a typo. We will correct this in the revised version. Thanks! --- Rebuttal Comment 1.1: Title: Reply Comment: Thank you for clarifying my concerns. However, I am still struggling to fully understand the motivation behind the prediction model. I know that $\mathbf{w}$ is sampled from $p(\mathbf{w} | \mathbf{P})$, and that evaluating $c(\mathbf{w})$ is often fast. However, finding a sample $\mathbf{w}$ that satisfies $c(\mathbf{w}) = \mathbf{y}$ can be difficult due to sparsity [1], and I think this is the main efficiency bottleneck. 
Could the authors elaborate more about how to address this sparsity issue? [1] Li, Q., Huang, S., Hong, Y., Chen, Y., Wu, Y. N., & Zhu, S. C. (2020, November). Closed loop neural-symbolic learning via integrating neural perception, grammar parsing, and symbolic reasoning. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for elaborating on their concern. The reviewer is absolutely correct that finding a sample $\mathbf{w}$ for which $c(\mathbf{w})=\mathbf{y}$ is very difficult for a fixed $\mathbf{y}$. However, 1. Our algorithm (Algorithm 1 in the paper) does not search for a $\mathbf{w}$ that satisfies this constraint. Rather, we sample _any_ $\mathbf{w}$, evaluate $\mathbf{y}'=c(\mathbf{w})$ to see what output we get for this particular $\mathbf{w}$, and train the prediction network to learn this mapping. This loop is fast, and does not require any search. 2. However, the reviewer is right that for the complex problems one will rarely sample a $\mathbf{w}$ that gives *exactly* the fixed $\mathbf{y}$ that we need. This is why we use the output factorization: indeed, the probability of sampling a $\mathbf{w}$ such that the output sum is exactly 362773882637293772 is extremely low. But sampling $\mathbf{w}$s for which some of the digits match is likely (around 1/10 for each digit). This allows us to train the prediction model efficiently. 3. Our experiments show that, with this loop, we can learn an approximate weighted model counter that is accurate enough to learn the perception model in complex problems. These problems could not be solved with other weighted model counting methods. We hope this addresses the concern and are available for further clarifications.
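The sample-and-evaluate loop described in this reply can be sketched concretely. The toy below is illustrative only: a frequency table stands in for the neural prediction model (a real A-NeSI predictor is a network conditioned on the beliefs $\mathbf{P}$), and `c` is a one-digit MNISTAdd symbolic function; all names are this sketch's, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def c(w):
    """Symbolic black-box function for a toy MNISTAdd: the sum of two digits."""
    return w[0] + w[1]

def train_prediction_table(P, n_samples=5000):
    """The loop from the reply: sample *any* w ~ p(w|P) (one cheap categorical
    draw per digit, no search for a w matching a fixed y), evaluate y = c(w),
    and fit the predictor on the resulting (w, y) pairs."""
    counts = np.zeros(19)  # possible sums of two digits: 0..18
    for _ in range(n_samples):
        w = (rng.choice(10, p=P[0]), rng.choice(10, p=P[1]))
        counts[c(w)] += 1
    return counts / counts.sum()

# Beliefs concentrated on digits 3 and 4; the learned predictor
# should put all its mass on the sum y = 7.
P = np.zeros((2, 10))
P[0, 3] = 1.0
P[1, 4] = 1.0
q = train_prediction_table(P)
```

Note that no iteration ever searches for a $\mathbf{w}$ with $c(\mathbf{w}) = \mathbf{y}$: every sampled $\mathbf{w}$ produces a usable training pair, which is the point of the authors' answer to the sparsity concern.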
Summary: This paper introduces a variant of Probabilistic Neurosymbolic Learning that uses neural networks for approximate inference (vs. prior exponential-time exact inference). The efficacy of this approach is well-supported by various experiments. Strengths: - The paper is well-written and overall quite clear (i.e., the paper is high-quality in general). The goals are clearly stated. For example, Section 3 starts with "Our goal is to reduce the inference complexity of PNL." - The experiments are described thoughtfully with good visualizations and tables. - Using MNISTAdd as a running example throughout was a great idea. I think it added a lot. - The authors are studying a significant problem -- i.e., that of neurosymbolic learning -- which feels especially timely. - It is a bit harder for me to comment on originality. To the best of my knowledge, this work is rather original, but there might be other related prior works not mentioned by the authors of which I'm unaware. Weaknesses: My main concerns are about the notation. I think it could be improved, as discussed here and in the Questions section. - See questions below. - Beyond that, some of the notation is a bit clunky, particularly the double subscripts -- e.g., $q_{\phi_p}$. - Relatedly, it seems like some of these subscripts are a bit inconsistent: - there's a p on the RHS of equation (2) but no p on the LHS of equation (3) - why? - in Section 3.3, there's both $\phi$ and $\phi_e$ seemingly inconsistently - more related questions below Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - The subscripts of $\textbf{P}$, the belief, seem a bit odd and not well-defined. Is $\textbf{P}$ a matrix? The subscript notation $_{w_i}$ seems especially odd/undefined. It would be good to make this more precise. - Is it correct that, once you know $\textbf{P}$, you can just read off $p(\textbf{w}|\textbf{P})$ directly from $\textbf{P}$? 
If so, it seems like $p(\textbf{w}|\textbf{P})$ obscures this. Is there an alternative way of writing that to make it more explicit, like $f_P(w)$? (I don't love that notation, either - but I suspect there's a way to make this clearer. What do you think?) - Is making the $\phi$ explicit in the notation accomplishing that much? My sense right now is that it's just obscuring the notation. Am I missing something? - $k_W$ , $k_Y$ -- "discrete choices" maybe needs a bit more explanation. It's unclear at first whether this is the number of choices *within* $W$ -- i.e., $|W|$ -- or how many choices are drawn from $W$. Maybe specifying $k_W$ in the example -- e.g., $k_W = 2$ in the example -- would be helpful. Same for $k_Y$. Also, maybe this is the right place to introduce the notation $w_i$ for $i \in \{1,\dots,k_W\}$ and specify the spaces that $w_i$ and $y_i$ live in. If for some reason giving these space symbols is not needed or unnecessarily complicated, could you please explain? It seems more grounded to give them symbols. - nit: should equation (5) have an indicator? (dropping it feels a bit too informal) - The symbolic pruner notation feels off to me. Is there a separate $s$ for each $i$? Do $q,s$ depend on $i$ in $q \cdot s$? It seems like it should based on $s = s_i(\cdot)$, but it seems like it shouldn't if they're vectors (i.e., what are the dimensions of $s,q$?). - Is $s_i$ given or learned? - "In our studied tasks, neural prediction cannot perform better than symbolic prediction." But sudoku shows slightly better neural prediction than symbolic prediction. It would be good to comment on this. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors do a nice job outlining various limitations of this work. I found the presentation honest and straightforward. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their supportive comments. We appreciate the detailed feedback on notation and useful questions, which we discuss below. > Double subscripts on parameters are clunky, and their use is inconsistent You are correct. We will update this and make it more consistent throughout the text, likely by having different parameters $\mathbf{\phi}$ and $\mathbf{\psi}$ for the prediction and explanation models. We will also update the inconsistencies throughout. > The subscripts of the belief need to be more well-defined. Is it a matrix? Agreed, we will do this. $\mathbf{P}$ is not quite a matrix. It is a list where each element is in some simplex. If $\Delta_n$ is the n-dimensional simplex, then $\mathbf{P}_i\in \Delta\_{k\_{W\_i}}$, where $k\_{W\_i}$ is the number of categories in the $i$th variable of $W$. Therefore, each $\mathbf{P}_i$ represents a categorical probability distribution over $k\_{W\_i}$ options. It is like a matrix where the length of each row is variable, but this subtlety would make the notation dense, hence why we treat it informally as a matrix-like object. We will play around with this. Thanks! > The "discrete choices" need more explanation. Clarify their use in the example. Good question, and related to our answer from above. In an earlier version, we defined these formally, but we later dropped this for brevity, since the exact definition of the spaces is not used in the paper. We will reintroduce these definitions to balance these concerns. To answer the reviewer's question: Each $w_i\in \{1, ..., k_{W_i}\}$ is a categorical random variable. Here $k_{W_i}$ is the number of categories in the $i$th variable, like above. Then $k_W$ is the number of discrete variables in the problem. > Can you read off $p(\mathbf{w}|\mathbf{P})$ directly from $\mathbf{P}$? Is there a less obscuring notation? This is correct. And yes, it does obscure it somewhat. 
One option is to use $\mathsf{Cat}(\mathbf{w}; \mathbf{P})$ to emphasise it is a multivariate categorical distribution, but this is more verbose. We will play around with some options, thanks for the suggestion! > Is making the parameter $\phi$ explicit accomplishing much? We wanted to emphasise that $q$ is a neural network with some parameters that change during training, especially in the two algorithms. We will give this some thought, but will likely keep it. > nit: should equation (5) have an indicator? (dropping it feels a bit too informal) Correct, we (informally) dropped the indicator. We will reintroduce it. > The symbolic pruner notation feels off Looking back on this equation, we agree. We defined $\mathbf{s}$ as a $k\_{W\_i}$-dimensional vector, but (as you mention) it is much clearer to define $s$ as a function. Then the definition is $q\_{\phi\_e}(w\_i=j|\mathbf{y}, \mathbf{w}\_{1: i-1}, \mathbf{P})=\frac{\hat{q}\_{\phi\_e}(w\_i=j|\mathbf{y}, \mathbf{w}\_{1: i-1}, \mathbf{P}) s\_{\mathbf{y}, \mathbf{w}\_{1: i-1}}(w\_i=j)}{\sum\_{j'=1}^{k\_{W\_i}} \hat{q}\_{\phi\_e}(w\_i=j'|\mathbf{y}, \mathbf{w}\_{1: i-1}, \mathbf{P}) s\_{\mathbf{y}, \mathbf{w}\_{1: i-1}}(w\_i=j')}$ (where $s\_{\mathbf{y}, \mathbf{w}\_{1: i-1}}: W\_i \rightarrow \\{0, 1\\}$). > Is the symbolic pruner $s$ given or learned? In our experiments, we give $s$. In Appendix G, option 4, we describe an extension where $s$ is learned. > "In our studied tasks, neural prediction cannot perform better than symbolic prediction." But sudoku shows slightly better neural prediction than symbolic prediction. It would be good to comment on this. The variance of both of these results is rather high. We believe this discrepancy simply comes from a lack of statistical power, and both prediction methods perform about equally. We will mention this. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my questions and committing to incorporating some of the proposed fixes! I maintain my score.
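The pruner equation discussed in the rebuttal above amounts to masking the neural distribution and renormalizing over the surviving values. A minimal numpy sketch with hypothetical names (it assumes the pruner leaves at least one legal value):

```python
import numpy as np

def pruned_step(q_hat, mask):
    """Combine the neural guess q_hat over values of w_i with a symbolic
    pruner mask s in {0, 1} (1 iff the value is consistent with y and the
    partial assignment w_{1:i-1}), then renormalize."""
    pruned = q_hat * mask
    return pruned / pruned.sum()

q_hat = np.array([0.5, 0.2, 0.2, 0.1])  # neural distribution over 4 values
mask = np.array([1.0, 0.0, 1.0, 0.0])   # pruner: only values 0 and 2 legal
q = pruned_step(q_hat, mask)            # mass of values 1 and 3 is zeroed
```

The pruned probabilities keep the relative ordering of the legal values while guaranteeing the sample satisfies the symbolic knowledge.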
Summary: The paper presents a polynomial-time solution to the approximate neurosymbolic inference problem for probabilistic neurosymbolic learning problems. The approach is based on a variant of predictive processing, with a prediction model and an explanation model. The results are interesting, and successfully solve three different non-trivial challenge problems: MNISTAdd, Visual Sudoku, and Warcraft Visual Path Planning. Strengths: + The paper is an interesting attempt at bringing predictive processing concepts to neurosymbolic reasoning using probabilistic programs. + The idea of output space factorization is useful, as it helps reduce the complexity of the problem. However, it seems that human effort may be needed to decompose the output in a helpful manner. + The design of belief priors using high-entropy Dirichlet distributions to cover all possible input scenarios or combinations of the factorized space is an interesting observation. + The pseudocode in Figure 2 is clear and makes the underlying algorithm more easily reproducible. Weaknesses: - It is unclear how these experiments in controlled settings like MNISTAdd relate to robust neurosymbolic reasoning in more exciting tasks, like image classification or activity recognition. For example, factorization in such settings may not be easy. - The paper has no discussion about how this approach is related to predictive processing. Perhaps a comparison would be helpful to the readers of the paper. - There are no bounds on the approximations being produced by this approach. How do the various design choices impact the quality of the approximation being obtained? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Does this approach lead to an improved algorithm or heuristic for weighted model counting in general? Thanks for the response in the rebuttal. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations listed in the paper are not clear. For example, it is noted that A-NeSI did not perfectly train the prediction model in more challenging problems. It would be helpful to make this precise: what is the problem, what makes it challenging, and what was the quantitative performance of A-NeSI? If it is one of the problems studied before, perhaps that section can be cited with an explanation. The responses from the authors are interesting. I have updated my score to reflect my view and understanding of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and interesting questions. > It is unclear how these experiments in controlled settings like MNISTAdd relate to robust neurosymbolic reasoning in more exciting tasks, like image classification or activity recognition. For example, factorization in such settings may not be easy. In the settings mentioned by the reviewer, we have background knowledge of what neural network outputs are allowed. For instance, consider scene graph generation. There, we may have rules like $\forall x, y: \phi(x, y)$. We have a natural decomposition over the universal quantifier: For each rule and every pair of objects, define a boolean variable indicating whether the proposition holds. This factorisation is rather large but should be easy to learn. We can also consider other heuristics, such as grouping per object and grouping by some rules. > The paper has no discussion about how this approach is related to predictive processing. We thank the reviewer for the suggestion. We are not experts in cognitive science and do not immediately see the suggested connection to predictive processing. However, the connection of the ideas of A-NeSI to cognitive science is an interesting option for future work. Of course, any suggested similarities from the reviewer are very welcome. > How do the various design choices impact the quality of the approximation being obtained? That is a good question. Our paper focused on empirically showcasing the strengths and scalability of our method, leaving approximation bounds for future work. However, we hypothesise these factors will make an impact: The expressiveness and architecture of the neural network, training time, the divergence between the prior $p(\mathbf{P})$ and the evaluation distribution, and the complexity of the background knowledge. The latter include problem size, sparsity, connectedness, decomposability and structure. 
We highlighted some of these concerns in the limitations section and will give it another pass. Exactly how these relate is a complex question; we suspect no easy answer exists yet. There has been a significant amount of research into whether neural networks can learn reasoning tasks, in and out of distribution, and no clear and general answer exists. > Does this approach lead to an improved algorithm or heuristic for weighted model counting in general? Good question! Yes, we can use A-NeSI for general approximated WMC. However, we have yet to compare A-NeSI to existing approximation schemes: For this paper, our focus was on efficiently estimating the _gradient_ of the WMC, rather than on the accuracy of the WMC estimate itself. We hypothesise that the main benefit of A-NeSI comes from amortisation (like in variational inference): Training on many different values of $\mathbf{P}$ can be expensive but allows for rapid computation for a new value of $\mathbf{P}$. Furthermore, A-NeSI is very flexible in computation budgets: We can train for as many iterations as possible in our compute budget to improve our estimations. A downside of A-NeSI is the lack of guarantees on the approximation bounds, as the reviewer mentions. Future work could study this question. > The limitations listed are unclear. Make this precise: what is the problem, what makes it challenging, and what was the quantitative performance of A-NESI? The fact that the prediction model is not trained perfectly is made clear in the divergence between neural and symbolic predictions in Section 4.1 for $N=15$. If there is such a divergence, the most likely answer, according to the prediction model, is not the one reflected by the MAP sum. Here, the increased dimensionality made it harder for the neural network to learn the problem perfectly. 
We found similar issues in path planning, a much more complicated task in all three dimensions mentioned in our limitations section: We needed much longer training times to get good gradient estimates. We revised the paper to add these examples.
Summary: This paper introduces A-NeSI, a fast approximate procedure for NeSy architectures based on probabilistic logic. These architectures do not scale gracefully to problems involving many possible worlds, and the goal of A-NeSI is to scale them up. In short, A-NeSI employs surrogate models (autoregressive neural nets) to replace the most computationally intensive steps. Experiments indicate a sizeable speed-up compared to SOTA probabilistic-logical approaches on a variety of tasks - from multidigit MNIST addition to pathfinding. Strengths: + Very clearly written. There are plenty of figures to build intuition, which is good. + Tackles a very prominent problem in neuro-symbolic AI - scalability of MAP inference for probabilistic-logical approaches, a major player in this field. + The idea of using surrogates is simple but very sensible. (What is not simple is ensuring the output satisfies the knowledge.) + All modeling choices are clearly motivated. + Empirical evaluation is both extensive and very promising. + Related work is both compact and comprehensive, well done. Weaknesses: No major deficiencies that I could find. This is a solid contribution. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1. How strongly does performance of A-NeSI depend on the complexity of the hard constraints/knowledge? Especially when knowledge encodes long-range interactions between variables (or, if you prefer, tree width). I would imagine beam search could choke up in these cases, and I'd appreciate some discussion on the expected behavior. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are clearly discussed in the conclusion. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments. > How strongly does performance of A-NeSI depend on the complexity of the hard constraints/knowledge? Especially when knowledge encodes long-range interactions between variables (or, if you prefer, tree width). I would imagine beam search could choke up in these cases, and I'd appreciate some discussion on the expected behavior. Good question. We believe the complexity of the constraints matters significantly for the learnability of the prediction model. While we leave a proper study for future work, we predict the task's predictability, decomposability and degree of structure are important factors. Addition and sudokus are two tasks that are easy to decompose, making learning the prediction model relatively easy. Path planning is less decomposable and, as you mention, requires more long-range interactions. The prediction model took much longer to train for this task. The reviewer proposes that the performance of beam search may degrade in challenging tasks. We do not currently see the bottleneck in the (test-time) beam search: In most neurosymbolic tasks, the perception model has low entropy after training, meaning there are few options to consider. For instance, [1] found that finding the MAP state is efficient in low-entropy distributions. [1] Manhaeve, R., Marra, G., & De Raedt, L. (2021). Approximate inference for neural probabilistic logic programming. In Proceedings of the 18th International Conference on Principles of Knowledge Representation and Reasoning (pp. 475-486). IJCAI Organization. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for answering my question. I would appreciate if the authors could mention their observations re. long-range interactions and low entropy/beam search performance in the paper. I remain of the opinion that this work is well worth accepting. --- Reply to Comment 1.1.1: Comment: Thanks for the comment. We agree, and will add it to the camera ready. 
We thank the reviewer for the suggestion.
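The test-time MAP search discussed in this thread can be sketched as a generic beam search over a factorized categorical distribution. This is a hypothetical stand-in, not the paper's implementation; with the independent factors used below, a greedy per-factor argmax would already be exact, and the beam only matters once symbolic knowledge couples the factors.

```python
import numpy as np

def beam_search_map(probs, beam_width=3):
    """Approximate the most probable assignment w under a factorized
    distribution, where probs[i][j] = p(w_i = j). Keeps the beam_width
    best partial assignments (by log-probability) at each step."""
    beams = [((), 0.0)]  # (partial assignment, log-probability)
    for p_i in probs:
        log_p = np.log(p_i)
        expanded = [(w + (j,), lp + log_p[j])
                    for w, lp in beams for j in range(len(p_i))]
        expanded.sort(key=lambda b: b[1], reverse=True)
        beams = expanded[:beam_width]
    return beams[0]

probs = [np.array([0.7, 0.2, 0.1]),
         np.array([0.1, 0.8, 0.1]),
         np.array([0.4, 0.5, 0.1])]
best_w, best_logp = beam_search_map(probs)
```

As the rebuttal notes, after training the perception model is typically low-entropy, so a small beam suffices in practice.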
Rebuttal 1: Rebuttal: We thank the reviewers for their time and their reviews. The reviewers mention the clear writing and statement of the problem, motivation and methodology (`iEEt`, `agwr`, `8FBf`), in particular, the use of the MNISTAdd example (`agwr`), the comprehensiveness of related work (`iEEt`), the description of the experiments (`agwr`) and the pseudocode (`xznW`). The reviewers also mention the importance of the problem A-NeSI tackles (`iEEt`, `agwr`). The reviewers discuss the simplicity of the method (`iEEt`, `8FBf`) with possibilities for future work (`8FBf`) while mentioning that A-NeSI contains several interesting (`xznW`, `8FBf`) and original (`agwr`) ideas. Furthermore, the reviewers note that the empirical evaluation is extensive and promising (`iEEt`), demonstrating scalability and performance (`agwr`, `8FBf`) in non-trivial problems (`xznW`). Reviewer `8FBf` asked for clarification on the motivation of the prediction and explanation models. We agree with reviewer `8FBf` that the explanation model is optional and discuss the benefits of also modelling the explanations. Furthermore, we clarify why training the prediction model does not require approximate counting or expensive sampling. Finally, while at _test-time_ the prediction model cannot be better than the symbolic prediction, we discuss why we need the prediction model to do scalable _training_. Reviewer `agwr` provides valuable in-depth comments about the notation, most of which we implement. In different forms, the reviewers `xznW`, `iEEt` and `8FBf` all ask how the techniques proposed in A-NeSI will generalise to other settings. We answer or clarify these questions in a separate response to the individual reviewers.
NeurIPS_2023_submissions_huggingface
2023
Expressive probabilistic sampling in recurrent neural networks
Accept (poster)
Summary: This paper studies circuit algorithms for sampling-based Bayesian inference in continuous-time rate-based recurrent neural networks. Specifically, it argues that using a linear readout from a noisy reservoir provides a substantial expressivity benefit relative to representing the sampled distribution directly in the recurrent population. Strengths: 1. The question of the minimal recurrent circuit architectures that allow sampling from complex distributions is clearly relevant to the neuroscience community at NeurIPS, and I think the main result is interesting (albeit unsurprising). 2. The experiments show to a reasonable degree that the proposed method can sample from non-trivial non-Gaussian distributions (MNIST). This is an improvement on most comp neuro papers on sampling, which largely focus on Gaussian distributions. My enthusiasm for this point is, however, dampened a bit by the fact that the networks here are rate-based and the authors aren't trying to optimize functionals of the dynamics (e.g. as in work from the Lengyel group), so it's not very surprising that this can be made to work. Weaknesses: 1. How does Proposition 1 differ from the results of Ma, Chen, and Fox, "A Complete Recipe for Stochastic Gradient MCMC" (NeurIPS 2015)? Perhaps I've missed something, but I do not see a novel contribution in Section 3.1. 2. The present manuscript does not address the speed of relaxation to the stationary distribution. The ability to obtain accurate samples quickly is a necessary feature of any model of sampling-based inference in the brain, and indeed the question of how to achieve fast sampling in biologically-plausible implementations has been a major focus of past work (e.g., Hennequin et al. 2014, Aitchison and Lengyel 2016, Echeveste et al. 2020, Masset et al. 2022). In my view, this is a key shortcoming of the learning algorithm proposed in Section 3.4: no attention is paid to whether the resulting network enjoys favorable convergence properties. 
Of course, one could modify the resulting drift and diffusion terms (as in Hennequin et al. 2014 or Masset et al. 2022), but this would require an additional optimization step. The authors should at least discuss this point; adding more experiments to probe this issue would make for a stronger paper. 3. In Lines 293-295 of the Discussion, the authors state that "Our preliminary experiments show that one-hidden-layer RSNs cannot readily approximate high-dimensional heavy-tailed distributions (e.g., those of overcomplete sparse coding representations [36])." If the authors have such results, they should show them. In particular, related to point (2) above, they should show how the convergence speed and number of reservoir neurons required to obtain accurate samples scales with the dimension of the target distribution. As the authors themselves discuss, their approach is currently limited to approximating the score with a single-hidden-layer MLP, and such Cybenko-Hornik-type approximation results may require immensely wide networks. This is also a limitation of the MNIST experiment: MNIST is intrinsically not that high-dimensional, and already here the authors used a network with 20000 reservoir neurons (Line 638)! 4. I think it's important to mention that the striking result of Figure 2 depends critically on the choice of ReLU activation function, particularly as much of the paper is concerned with existence results. If instead one used $\tanh$, then the performance gap between the trained sampler-only firing rate and reservoir firing rate models should be small. In particular, a sampler-only firing rate network can implement exact Langevin sampling in this case. The score of $p(x) = \frac{1}{2} \mathcal{N}(\mu , \sigma^{2}) + \frac{1}{2} \mathcal{N}(-\mu,\sigma^2)$ is $\frac{d}{dx} \log p(x) = \frac{1}{\sigma^{2}} \left[ - x + \mu \tanh\left(\frac{\mu}{\sigma^{2}} x\right) \right]$. 
This gives the Langevin dynamics $dx(t) = \frac{d \log p}{dx} dt + \sqrt{2} dB(t) = \frac{1}{\sigma^{2}} \left[ - x + \mu \tanh\left(\frac{\mu}{\sigma^{2}} x\right) \right] dt + \sqrt{2} dB(t)$. With $\mu = 1$ as is used in Figure 2, this is exactly of the form of a single-neuron sampler-only firing rate network in eq (8) of the submitted manuscript, with $D = 1/\sigma^{2}$, $W_{rec} = 1/\sigma^{2}$, $I=0$, and $\phi(x) = \tanh(x)$. One can easily verify numerically that this network produces accurate samples from the desired bimodal distribution. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - The figure axis labels are too small, making them hard to read. Figure 4b is also too small. Please increase the font and panel sizes to improve legibility. - Lines 32-33: Along with the other references on Gaussian sampling, Hennequin, Aitchison, and Lengyel, "Fast Sampling-Based Inference in Balanced Neuronal Networks" (NeurIPS 2014), should be cited. - Line 225: I do not see how the proof of Theorem 3 is "constructive," since it is based on an existence-type universal approximation theorem. - Lines 303-304: "This procedure is more aligned with the developmental processes involved in forming visual representations in the infant brain, where visual representations are thought to be noisier (less linearly separable) initially [3]" Please elaborate; I don't quite follow. - Line 317-318: This is quite vague. Having a clearer proposal for what this model might help explain in neuroscience would make for a much stronger paper. - It would be helpful to repeat the theorem statements in the Supplement. - Lemmas C.2 and C.3 can be replaced by using standard results on the asymptotic expansion of the upper incomplete Gamma function for large argument, see https://dlmf.nist.gov/8.11. - In Line 562, in the proof of Theorem 3, should $m$ be replaced by $n$? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors provide a generally adequate discussion of the limitations of their work (modulo the weaknesses discussed above), but fail to show the data for some points (see point 3 under Weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
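The reviewer's single-neuron tanh construction in Weakness 4 can be checked with a short Euler-Maruyama simulation of the stated Langevin dynamics. The review fixes only $\mu = 1$; the variance $\sigma^2 = 0.25$, step size, and horizon below are illustrative choices.

```python
import math
import numpy as np

mu, sig2 = 1.0, 0.25  # target: 0.5*N(mu, sig2) + 0.5*N(-mu, sig2)

def drift(x):
    # score of the bimodal mixture, realizable by a single tanh unit:
    # (1/sig2) * (-x + mu * tanh(mu * x / sig2))
    return (-x + mu * math.tanh(mu * x / sig2)) / sig2

rng = np.random.default_rng(0)
dt, n_steps, burn_in = 0.005, 200_000, 20_000
noise = rng.standard_normal(n_steps) * math.sqrt(2.0 * dt)
x, samples = 0.0, np.empty(n_steps)
for t in range(n_steps):  # Euler-Maruyama: dx = drift dt + sqrt(2) dB
    x += drift(x) * dt + noise[t]
    samples[t] = x
samples = samples[burn_in:]
```

After burn-in the chain spends comparable time near both modes $\pm\mu$, with overall mean near 0 and variance near $\mu^2 + \sigma^2$, consistent with the target mixture.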
Rebuttal 1: Rebuttal: We appreciate the reviewer’s critiques and suggestions. Below we address the reviewer’s concerns point by point: Strengths: With respect to the two enthusiasm-limiting aspects, we believe we understand them, and in our revision will add text specifically pointing these limitations out, with citations to the relevant literature (e.g. Echeveste et al. 2020 and allied papers) as well as the explicit suggestions that future work should extend our findings to these settings. In more detail, we will explain: First, it is true that we study the ability to produce samples from given “target” distributions, as opposed to studying (input-dependent) networks that additionally solve an inference problem (as in Echeveste et al. 2020). Second, we study this problem in the context of continuous-valued “rate” networks as opposed to jump-discontinuous spiking systems. This said, we do believe that our results, which are novel in their scope of general distributions and fairly complex technically, are best presented in terms of sampling from fixed, general (beyond-Gaussian) target distributions and in terms of the broadly used and mathematically tractable rate networks. (With respect to a tractable partial extension towards spiking networks, please also see our response to Q3 of Reviewer Y31x.) Weakness: 1. We appreciate the reference to Ma, Chen, and Fox 2015, and in our revised Section 3.1 we will both cite this paper and explain the distinction of our results from their Proposition. Specifically, these authors assume a particular form of the key divergence-free field (G in our paper), and to do so they introduce a second “gamma” term to keep G divergence-free (see equation 3 therein). 
However, with this approach to the gamma term, it is not easy to relate the expressivity of a neural network’s “drift” (i.e., its dynamics, given by decay plus weights times activations) term with the score function (specifically, without knowledge of their Q matrix and its derivative). In section 3.1 and appendix A, we take a novel and distinct approach, based on the Helmholtz-Hodge decomposition, that allows us to directly relate the score and required aspects of the network dynamics. Specifically, Section 3.1 shows that even if the neural dynamics are allowed to be irreversible (G is not 0), the function class of all dynamics still needs to have enough basis functions to approximate the score function part of the drift term that determines the stationary distribution. 2. We thank the reviewer for this critique and their important idea here. This has led us to an explicit new treatment of sampling speed (relaxation) to be included in our revision -- please see the new figure in the .pdf attached to our global reply. We will also add text covering this important topic and the literature the reviewer notes in our revision. In more detail, following the reviewer's critique we have identified a simple way to increase sampling speed without additional optimization steps. This is by letting the divergence-free (DF) field G in our paper be of the form $J\nabla \log p$ (treat Q as constant in Ma, Chen, and Fox 2015), where $J = -J^T$ can be any skew-symmetric matrix that is either learned through training or prescribed. Then the equation 14 in our paper would be $2\alpha (I + J)^{-1}(W_{out} \phi(\tilde{W}_{rec} x + I) - x) \approx \nabla\log p(x)$. The rest of the training procedure stays the same. Note that we do not need to alter the form of the dynamics in Equation 11. This improvement to the score matching loss gives a significant increase in sampling speed as we show in Figure S2. 3. 
As requested, we show the preliminary results of our reservoir sampler trained on the high-dimensional latent representations produced by a sparse coding model on MNIST (Fig. S4). Here, the latent space dimension is 3136 (four times overcomplete) and we used 30000 hidden neurons in the sampler. We do note that this result is highly preliminary, and based on our work with the PCA projection we believe that future work (on more powerful GPU systems) with larger reservoirs (though, of course, not yet approaching the millions of units available in local circuits of the mammalian cortex) and/or other learning rules and hyperparameters is quite likely to succeed. In addition, we argue that MNIST is low-dimensional only if it undergoes a nonlinear transformation. The first 300 PC components from MNIST explain just ~90% of the variance. Using a nonlinear autoencoder, we can indeed learn the distribution with a much smaller number (500) of reservoir neurons; see also Figure S3. 4. We thank the reviewer for this insightful comment: they are absolutely right that a tanh nonlinearity can lead to sampling of bimodal distributions. However, another limitation does arise in this case: since $\tanh(x)$ and $x$ are both odd functions, any distribution whose score function does not have a related symmetry property cannot be approximated (see Figure S1). 
- The main application of our work is to mechanistically model neural circuits that produce varied, non-Gaussian prior distributions that support Bayes-optimal behavior (references 21 and 26 in our paper) - With thanks, we will also implement the changes suggested by the reviewer's other careful questions. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thoughtful reply to my questions and those of the other reviewers. I appreciate their addition of tests relating to convergence time; I think this will enhance the final version. Given these additions, I will raise my score. A few small replies to individual points: - Of course the choice of activation function will affect the choice of distributions that can be learned; I do not think the authors need to replace the existing ReLU figure. I think it is sufficient to note that for certain distributions the score can be realized exactly by sampler-only networks. - I still do not follow the relationship with infant learning. This is, however, a minor point, so I do not think further elaboration is necessarily required. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your review and feedback. We agree with your suggestion to note the fact that sampler-only networks can sample exactly from some distributions. We will make sure to incorporate these changes in our revision.
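The claim in point 2 of the rebuttal above, that a constant skew-symmetric term $J$ added to the drift leaves the stationary distribution unchanged while allowing faster relaxation, can be sanity-checked on a toy 2-D Gaussian. Everything below (the target covariance, the choice of $J$, step size, and horizon) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])      # illustrative 2-D Gaussian target N(0, Sigma)
Sigma_inv = np.linalg.inv(Sigma)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])         # any skew-symmetric matrix (J = -J^T)

rng = np.random.default_rng(1)
dt, n_steps, burn_in = 0.02, 100_000, 10_000
A = np.eye(2) + J                   # reversible plus irreversible drift: (I + J)
noise = rng.standard_normal((n_steps, 2)) * np.sqrt(2.0 * dt)
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    # dx = (I + J) grad log p(x) dt + sqrt(2) dB, with grad log p = -Sigma^-1 x
    x = x + A @ (-(Sigma_inv @ x)) * dt + noise[t]
    samples[t] = x
emp_cov = np.cov(samples[burn_in:].T)  # should approximate Sigma
```

For this linear case the check is also analytic: with drift matrix $B = (I+J)\Sigma^{-1}$, the stationary covariance $C$ solves $BC + CB^T = 2I$, which $C = \Sigma$ satisfies since $(I+J) + (I-J) = 2I$.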
Summary: In this paper, the authors address the question of how the dynamics of recurrent neural networks (RNNs) can generate samples from probability distributions of interest. This is of interest to the neuroscience community, since it has been proposed that biological networks can use such sampling-based inference to represent and compute with probability distributions in order to account for their noisy surroundings. The authors prove that standard RNNs have limited expressivity when used directly for sampling-based inference. They proceed to show that RNNs with separate (linear) readouts, in contrast, have the necessary expressivity to approximate any probability distribution through Langevin sampling. Finally, it is demonstrated empirically that such a 'reservoir' formulation allows RNNs to generate samples from a variety of complex distributions in practice, including multimodal, heavy-tailed, and high-dimensional distributions. Strengths: The paper contains several interesting theoretical results related to the topic of sampling-based inference using recurrent neural networks. This is an important open problem in computational neuroscience, where it remains unknown how the brain performs (approximate) probabilistic computations and represents the necessary probability distributions for such computations. By providing a firmer theoretical grounding for our understanding of sampling-based inference, this work has the potential to inspire both new computational and experimental studies of probabilistic computations in neural circuits. The paper is also generally well written and demonstrates impressive empirical results, although only on relatively 'toy' problems. Weaknesses: Much of the motivation and discussion of the paper is written with reference to the neuroscience literature on sampling-based inference, especially in sensory cortices. 
However, the empirical results all use artificial toy examples with no obvious relation to questions of interest to the neuroscience community. It would have been interesting to e.g. look at natural images or similar and delve a bit deeper into the dynamics learned by the network to make it more interesting to a neuroscience audience. Many of the figure axis labels and legends are too small to be easily readable, and they would benefit from an increased font size. Technical Quality: 3 good Clarity: 3 good Questions for Authors: L60: The authors state that they derive a 'biologically plausible' learning rule, but the training procedure described in Eq 14 does not seem particularly biologically plausible, requiring both backprop and matrix inverses (as the authors also highlight themselves in L288-291). Perhaps the introduction could be reworded to make this initial statement more in tune with the subsequent methods/results? L73: it would be useful to briefly state what these regularity conditions entail. L114: Helmholtz typo Figure 2 legend: "SO-SC and SO-FR is (_sic_) only able to fit the score function with piecewise linear function (_sic_) when using ReLU transfer function (_sic_)". Why the contrast between 'piecewise linear' in the legend and what looks like globally linear approximations to the score function in the figure? Ref 5 & 37: formatting errors in author name ("GergHo Orban"). I am slightly confused by the authors' network dynamic equations (Eq 7 & Eq 8). When modeling neural firing rates, a positive (e.g. ReLU) nonlinearity is commonly used since firing rates are strictly positive. However, would the Brownian noise term added to the rates in Eq 8 not allow for negative neural firing rates? I was also surprised by the synaptic current equations, since RNN dynamics are usually formulated in terms of either firing rates or membrane potentials. 
Potentials and currents are of course equivalent for ohmic units, but I was wondering if there was a reason that the authors decided to describe Eq 7 in terms of currents rather than membrane potentials? The authors have discussed the case of a non-linear RNN with a linear readout and shown that it can approximate arbitrary probability distributions. Is the same true for non-linear RNNs with non-linear readouts? and if so, could the reservoir sampler network be considered a special case of a non-linear RNN where only a subset of units are used to represent samples from the target distribution, and the remainder are used for additional computational power/expressivity? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations of their study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s recognition of the significance of our work and the comments and suggestions. Below we address the reviewer’s concerns and questions. Weaknesses: - Regarding the application of our model to natural images: We thank the reviewer for bringing up this point. We agree that training our reservoir sampler on natural image patches would be more relevant for neuroscience applications. We plan to train the model on the CIFAR 10 dataset and evaluate the Fréchet inception distance (FID) of the model in the revised version. - We thank the reviewer for the accurate critique regarding our figure formatting and will fix the axis labels and legends in the revised version. Questions: - L60: The reviewer is absolutely correct that there is still considerable distance between our proposed algorithm and a biological implementation, and our revision will clearly note this. Our meaning here was indeed more narrow than we now see we expressed, in that the learning rule “sidesteps the demands of backpropagation through time (BPTT)”. We will carefully rephrase our usage of “biologically plausible” to make sure this qualification is clear throughout our revision. - L73: please see Cerrai, 2001 Hypothesis 2.1 & 2.2. Roughly this indicates that 1) the drift term b(x) does not increase too fast (slower than exponential rate) as x goes to infinity, 2) the drift term is locally Lipschitz, and 3) the diffusion coefficient is invertible as we assumed in our work. - L114: the typo will be fixed in the camera-ready version. - Figure 2 legend: We appreciate this astute point and the opportunity to clarify here; in addition, our revision will include a revised version of this figure that does not have the confusing linearity of the score function (please see Fig S1). In reply, we first recall that the solutions are found by minimizing the denoising score-matching loss. 
Because we are using a distribution whose score function is symmetric with respect to the origin, it makes sense for the solution to have the same property. With a piecewise linear activation function, SO-FR and SO-SC have the ability to “bend” the score function at a single point, but due to this symmetry they will not benefit from this in terms of the score matching loss, hence the seemingly “global linear approximation”. We believe that the new Figure S1, which essentially conducts the same experiment using tanh nonlinearity instead of the ReLU function, presents the underlying situation much more clearly, and thank the reviewer for inspiring it. - Ref 5 & 37: will be fixed in the camera-ready version - Noise term can produce negative values within firing rate dynamics: We thank the reviewer for another on-target and careful point. Yes indeed, this is true, and an imprecision (or flaw) that comes along with our basic SDE formulation of the rate-RNN on an unrestricted domain. In our revision, we will carefully note this. We will also state that one quick remedy would be to add a high energy barrier at 0, i.e., add $C \cdot \operatorname{ReLU}(-x)$ to the score function where $C$ is large compared to the noise magnitude, so that the dynamics would almost never output negative values. Since this only adds one more basis function, it would not affect our main argument. - Nomenclature of synaptic current dynamics: We absolutely agree that this terminology can be confusing! After surveying a number of sources in the literature without identifying a clear consensus, we simply settled on the present choice as it follows the convention in the seminal text of Dayan & Abbott (2001), “Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems” (cf. eq. 7.39). - What about non-linear readouts: If the nonlinear function following the linear layer is invertible (e.g. 
tanh), then the result still holds, and the density function would be scaled by the determinant of the Jacobian. Otherwise, the result may not hold. For example, if we just apply ReLU to the sampler neurons, then we can only obtain positive samples. And yes, if we ignore the transients of the sampler neurons, then we can say that the RSN is a special case of a non-linear RNN where only a subset of units are used to represent samples from the target distribution. --- Rebuttal Comment 1.1: Comment: I appreciate the thorough response from the authors to myself and the other reviewers. The additional experiments with tanh nonlinearities and investigations of relaxation time will be particularly useful additions to the paper, and I look forward to seeing the results on the CIFAR 10 dataset when they are finished. I would also consider adding a sentence or two to the paper about the distinction between an RNN with a linear readout vs. an RNN where only a subset of neurons represent the target distribution, since the latter seems more intuitive from a neuroscience point of view. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your review and feedback. We agree with your suggestion for additional experiments on tanh nonlinearities, relaxation time, the CIFAR10 dataset, and more explanations of the distinction between the RSN and sampling from part of a larger RNN. We will make sure to incorporate these changes in our revision.
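The soft-barrier remedy proposed in the rebuttal above (adding $C \cdot \operatorname{ReLU}(-x)$ to the score so that the rate dynamics almost never go negative) can be checked with a quick Langevin simulation. This is our own hedged sketch with an assumed standard-Gaussian base score and illustrative parameter values, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
C, dt = 50.0, 0.01  # barrier strength and Euler step (illustrative values)

def score(x):
    # Standard-Gaussian score plus the soft barrier C * ReLU(-x), which
    # pushes the state back up whenever it dips below zero.
    return -x + C * np.maximum(-x, 0.0)

x, samples = 1.0, []
for t in range(30000):  # Euler-Maruyama Langevin: dx = s(x) dt + sqrt(2 dt) xi
    x = x + score(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if t > 5000:        # discard burn-in
        samples.append(x)
samples = np.array(samples)

# Negative excursions are confined to a thin layer of width ~1/sqrt(1 + C).
assert np.mean(samples < -0.5) < 0.01
assert samples.mean() > 0.3
```

Consistent with the rebuttal's argument, the barrier does not forbid negative values outright; it merely makes deep excursions below zero exponentially unlikely relative to the noise magnitude.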
Summary: - This study solves an outstanding problem involving arbitrary density sampling using stochastic neural networks. - The authors propose a Reservoir Sampling architecture, whereby an auxiliary recurrently-connected population facilitates the sampling from a non-trivial distribution. - The work seems solid and elegant (although SDEs are not my domain of expertise), and the numerical experiments are convincing. UPDATE: Sep 1, 2023. I have read the rebuttal; it addressed my Qs and I maintain my score. Strengths: - The math looks solid, although I did not have time to go through the supplement. - There are numerical experiments on complex distributions (MNIST PCs), demonstrating that their procedure can operate on non-trivial densities. Weaknesses: - This work seems to address an _existence_ problem with arbitrary density sampling with RNNs, but how would one validate this kind of model in physiological experiments? - In fact, I would appreciate more discussion on the state of affairs in the neurophysiology of density representation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - This auxiliary RNN population approach reminds me of a recent poster/paper at cosyne by Duong et al. ("Adaptive whitening in neural populations with gain-modulating interneurons"; ICML 2023 https://openreview.net/forum?id=cEWB5hABV5), where they show that if the auxiliary population is greater than a specific size, it can represent and whiten multivariate Gaussian densities exactly. In your framework, is there a relationship between the dimensionality of the reservoir and the expressivity of the network in terms of what kinds of densities it can represent? Could this provide testable predictions for an experimentalist to search for? - Suggestion: I did enjoy reading this manuscript, but there was more emphasis on technical parts than necessary to get your point across. 
If the text were reframed with less emphasis on the math, I think it would flow much nicer and would be much more approachable. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There was adequate discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s critiques and suggestions. Our point-by-point replies follow, together with changes we will make to the revision in light of the reviewer’s points. Weaknesses: - Validation in physiology experiments: This is a great question and one which we now see we should have addressed fully in our submission; our revised manuscript will add to the discussion to improve the paper in this way. In particular, we will describe the following general methodology: A first step in analyzing physiological data in our framework is to identify windows where the distribution of recorded neural activity is roughly stationary over time. The next step is to train different network models using our framework to check if these models are capable of generating samples from the same stationary distribution as measured. Validation would involve not only asking whether this procedure succeeds, but evaluating the size of the “unobserved” reservoir network (i.e., number of hidden neurons) that is necessary, as well as whether different features of single-neuron physiology (modifications to the single-neuron and coupling terms that define the network) and priors on network connectivity lead to the better fitting of the experimental distribution. We note that the outcome here is an answer to the question of what the conditions are on network size, together with the set of weights/other physiological parameters, that can reproduce the distribution of physiological data. We do caution, however, that it will not be a complete answer, as failure to match these data could also arise from limitations in the underlying learning algorithm and/or optimization process. 
- Regarding the state of affairs in the neurophysiology of density representation: We agree that providing more here would improve our paper, and in our revision, we will add text to the introduction and discussion to relay at least the following: Strong evidence shows that both humans and animals use some form of uncertainty information to guide their behavior; moreover, multiple brain areas have been identified as neural correlates of this uncertainty [2]. Neurophysiology data and analyses that directly verify how neural activity represents probability densities remain in their early stages. However, neural sampling theories, to which our work contributes, have made empirical predictions of certain physiological properties such as noise correlations and Fano factors, and these have been verified in, e.g., [1]. We see our work as opening the door to an allied study of higher-order statistics that more fully describe complete probability distributions. Questions: - The higher the reservoir dimensionality, the closer the stationary distribution of the sampler is to the target distribution, because in this case one has more basis functions at one's disposal to match the score function. For testable predictions, a hallmark of whether enough neurons are considered is whether the distribution can indeed be captured; if not, our work suggests that capturing it may require a reservoir of neurons that supply input signals. Please also see further experimental implications under the response to your first “weakness” point above. - We are glad that the reviewer enjoyed reading our work, and appreciate the technical nature of much of our presentation as well. 
For the revision, we will go through the paper top to bottom, and identify (1) at least several places where technical details can be moved to the supplement, and (2) clear topic sentences that can be added in less technical language that describe the import of a mathematical argument that follows, with the latter set off by a phrase such as “The technical details of this are as follows,” to accomplish the same objective. [1] Orbán G, Berkes P, Fiser J, Lengyel M. Neural Variability and Sampling-Based Probabilistic Representations in the Visual Cortex. Neuron. 2016 Oct 19;92(2):530-543. doi: 10.1016/j.neuron.2016.09.038. PMID: 27764674; PMCID: PMC5077700. [2] Rullán Buxó, Camille, and Cristina Savin. "A sampling-based circuit for optimal decision making." Advances in Neural Information Processing Systems 34 (2021): 14163-14175. --- Rebuttal Comment 1.1: Comment: Thanks for your responses to my review. Because SDEs are not my area of expertise, there is a possibility that I'm missing technical details, so I will keep my score the same. But I will raise my confidence (2 -> 3) to reaffirm that I think the community would find this interesting. I do find the relation to neurophys and testable predictions a little weak, but that seems to just be where the field is at right now. I will reiterate that there is a larger audience that could appreciate this work, but some of the simple concepts are opaquely hidden behind the over-emphasis on technical presentation in its present state. So, I hope the authors follow through on reformulating the presentation, including thinking about how to reach a broader neuroscience community and not just the neural probabilistic sampling community. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your review and feedback. We are in agreement with your suggestion to reformulate the manuscript for a broader neuroscience audience. We will make sure to incorporate these changes in our revision.
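The expressivity intuition in the rebuttal above — a larger reservoir supplies more basis functions with which the linear readout can match a score function — can be sketched with a random-feature least-squares fit. This is our own illustration with an assumed bimodal Gaussian-mixture target, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: score of an even mixture of two unit-variance Gaussians at +/-2.
xs = np.linspace(-5, 5, 400)
g1 = np.exp(-(xs - 2) ** 2 / 2)
g2 = np.exp(-(xs + 2) ** 2 / 2)
score = (-(xs - 2) * g1 - (xs + 2) * g2) / (g1 + g2)

def fit_error(n_basis):
    # Random tanh features stand in for reservoir neurons; the linear
    # readout weights are obtained by least squares.
    w = rng.standard_normal(n_basis)
    b = rng.standard_normal(n_basis)
    phi = np.tanh(np.outer(xs, w) + b)
    coef, *_ = np.linalg.lstsq(phi, score, rcond=None)
    return np.sqrt(np.mean((phi @ coef - score) ** 2))

e_small, e_large = fit_error(3), fit_error(200)
assert e_large < e_small  # more basis functions -> closer score match
```

The monotone trend (not the specific error values) is the point: the quality of the score approximation, and hence of the stationary distribution, improves with reservoir size.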
Summary: The authors proposed a reservoir-sampler network whose firing rate dynamics can sample from an arbitrary probability distribution. They first established the relationship between the sampling power of the neural dynamics and the ability of the dynamics to approximate the score function. Then they showed that the synaptic current dynamics of the traditional sampler-only networks is only able to approximate score functions that are in a finite-dimensional function space. Importantly, they proved that the reservoir-sampler network can sample from an arbitrary probability distribution to arbitrary precision. Finally, they developed a biologically-plausible learning rule for the proposed model. Strengths: This paper is very clearly presented and the scientific question is quite interesting. Weaknesses: Comparison with previous models should be addressed in the experiment part. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Q1: Why did the authors choose to use the reservoir network as the building block? Q2: Following the first question, does the RSN model work because of the linear readout layer, which rescales the distributions? Can the authors explain more about this? Q3: Can the current model be extended to the multiplicative noise scenario, which could be a feature of Poisson spiking neurons, or (at the network level) a feature of the E-I balanced spiking net? Q4: How does the RSN model compare to the coupled attractor model which also implements Langevin sampling [1]? [1]: Zhang, W. H., Lee, T. S., Doiron, B., & Wu, S. (2020). Distributed sampling-based Bayesian inference in coupled neural circuits. bioRxiv, 2020-07. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors addressed some limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough summary of our paper, as well as the comments and suggestions. Weakness: We appreciate the importance of stronger comparisons with previous models, and in our revision, we will significantly expand our treatment of these in the main text. In the current version of the paper, the sampler-only architecture serves the role of previous models, and we discussed how previous works can be seen as special cases of either the SO-SC or SO-FR model in Appendix D. In the revised version, we will include and expand the contents of Appendix D in the main text. Questions: Q1: Although the brain exhibits a hierarchical structure (e.g., the visual hierarchy), within each level of the hierarchy neurons are recurrently connected, i.e., they may form a reservoir. This is our motivation for using the reservoir network as a basic building block of our model. Q2: Intuitively, the linear readout layer linearly combines different basis functions, and the reservoir is responsible for supplying a sufficient number of basis functions. Therefore the RSN model works because of 1) a large enough reservoir and 2) the linear readout layer. Q3: Thank you for this great question. Yes, with a slight modification of the Fokker-Planck equation in the paper. If we take the diffusion coefficient to be $\sigma(x) = \sigma \sqrt{\operatorname{diag}(x)}$ and the diffusion matrix $\Sigma = \frac{1}{2} \sigma(x) \sigma(x)^T = \frac{1}{2} \sigma^2 \operatorname{diag}(x)$, it can be shown that the stationary distribution $p$ satisfies $\nabla \cdot\left(\frac{1}{2} \sigma^2 p \, \mathbf{1} + \Sigma \nabla p - p F\right) = 0$, where $F$ is the drift term of the neural dynamics and $\mathbf{1}$ is the vector of ones. Again, if we take the field inside the divergence (the probability flux) to be 0 and divide both sides by $p$, then the corresponding score matching problem would be $\nabla \log p \approx \Sigma^{-1} \left(F - \frac{1}{2} \sigma^2 \mathbf{1}\right)$. Q4: Zhang et al. propose that coupled CANNs can implement Langevin sampling on the space of latent stimulus features. 
To see the connection between their work and ours, it is helpful to consider the functional form of the Langevin dynamics used by these authors (cf. eq. 4 in their paper). The score function is set directly by means of a formula to obtain the desired target distribution, and since only Gaussian distributions are considered, the stimulus feature need only be linearly transformed (recall that the score function of a Gaussian is linear). Our work, by contrast, (1) treats general target distributions, beyond the Gaussian case, (2) proposes a way to iteratively learn the score function by just looking at the samples from the target distribution, and (3) specifies conditions in which the score functions corresponding to general target distributions can and cannot be successfully learned.
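The multiplicative-noise extension sketched in the reply to Q3 can be illustrated in 1-D with a Cox-Ingersoll-Ross-type process, whose $\sqrt{x}$ diffusion is a scalar instance of the $\sigma(x)=\sigma\sqrt{\operatorname{diag}(x)}$ form above. The parameter values are our own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# dx = a*(b - x)*dt + sigma*sqrt(x)*dW  (state-dependent, Poisson-like noise)
a, b, sigma, dt = 1.0, 2.0, 0.5, 0.01
x, samples = b, []
for t in range(100000):  # Euler-Maruyama, clipping sqrt at zero for safety
    x += a * (b - x) * dt + sigma * np.sqrt(max(x, 0.0) * dt) * rng.standard_normal()
    if t > 10000:        # discard burn-in
        samples.append(x)
samples = np.array(samples)

# The stationary law here is a Gamma distribution with
# mean b and variance sigma^2 * b / (2a); check both empirically.
assert abs(samples.mean() - b) < 0.15
assert abs(samples.var() - sigma**2 * b / (2 * a)) < 0.1
```

The stationary density is non-Gaussian and supported on the positive half-line, which is exactly the regime where the modified score-matching relation with $\Sigma^{-1}$ (rather than a constant diffusion) applies.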
Rebuttal 1: Rebuttal: We thank all reviewers for their expert review and insightful critiques, suggestions, and comments. We have written a detailed, point-by-point reply to each reviewer's issues and concerns in the replies to individual reviewers. In the pdf attached to this general response, we illustrate the results requested by the reviewers. Specifically, we show more results on the double peak experiment using tanh nonlinearity (Figure S1), accelerated sampling with irreversible dynamics (Figure S2), improved MNIST generation using an autoencoder (Figure S3), and the preliminary result on the sparse coding setting (Figure S4). Pdf: /pdf/22829f6762e0d7b744e0dfab978a3680718556df.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper investigates architectural requirements for recurrent neural circuits to sample from complex distributions using diffusion models. It presents a model where traditional sampler-only networks are enhanced with additional firing-rate dynamics and a set of separate output units, called reservoir-sampler networks (RSNs). An efficient training procedure based on denoising score matching is proposed. Empirical experiments are presented to demonstrate the model's ability to sample from complex data distributions. Strengths: - The research addresses a significant issue in computational neuroscience and Bayesian learning – how neural dynamics can sample from a complex distribution. - Biological plausibility and mathematical computation efficiency are particularly commendable aspects of this work. - The theoretical analysis is well constructed, providing strong support for the model design, including the choice of firing-rate dynamics and the reservoir-sampler network. - The empirical experiments presented provide evidence of the effectiveness of the proposed method. Weaknesses: - The experiments conducted need improvements. Despite using score matching for model training, the quality of the generated images leaves room for improvement. The incorporation of Unet-like or transformer backbones might have enhanced the results. - The paper lacks a clear, diagrammatic representation to explain the utilization of diffusion models in the design of the recurrent neural circuit architecture. - The results and conclusions seem to have limited impact on the general AI/ML areas, given the limited experimental results; comparison (e.g., precision and efficiency) with existing diffusion probabilistic models is somewhat lacking. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: I am satisfied with the discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. Below we address the reviewer’s concerns: Weakness: - Image quality is poor: It is an accurate point that the fidelity of the generated images is substantially lower than would be realized by, for example, a U-net. However, our goal here is to study probabilistic sampling in the class of RNNs that is broadly used as mechanistic and functional models in neurobiology. Specifically, we ask: 1) whether the RNNs considered in the current literature can sample from complex distributions, and 2) if not, what form of RNN is able to achieve this? While we believe our paper gives significant answers to both questions, we fully acknowledge the subpar performance of using the RSN to generate complex images in contrast to U-nets. We will acknowledge this explicitly in our revision and point the reader to the discovery of biological components that enable better score matching as an important area of further research. - We thank the reviewer for this important comment. We wish to underscore that our approach, while partially inspired by the diffusion model, also holds differences. While the diffusion model can be conceptualized as a time-inhomogeneous SDE with a finite time horizon, our work delves into a time-homogeneous SDE (the neural dynamics) with an infinite time horizon. The entire training and sampling pipeline is diagrammatically described in Figures 1 and 4a. Recognizing the need to make this more clear, in our revision, we will (1) more clearly refer to this diagram in the main text, and (2) add clear pseudocode to the diagram. We are confident that these two changes will both more clearly explain our methodology and illustrate to the readers the similarities and differences with the classical diffusion approach. 
- We acknowledge that our emphasis is on implications for neuroscience and that, as such, our work has limited implications for the cutting-edge AI/ML area in the short term. However, we do note that researchers are starting to look at how sampling/diffusion can play a role in RL, robotics, and modeling human behavior [1-4]. Our paper explores the neural underpinnings of those applications and has the potential to inspire more AI/ML work in this regard. [1] Ajay, Anurag, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. “Is Conditional Generative Modeling All You Need for Decision-Making?” arXiv, July 10, 2023. http://arxiv.org/abs/2211.15657. [2] Chi, Cheng, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. “Diffusion Policy: Visuomotor Policy Learning via Action Diffusion.” arXiv, June 1, 2023. http://arxiv.org/abs/2303.04137. [3] Janner, Michael, Yilun Du, Joshua B. Tenenbaum, and Sergey Levine. “Planning with Diffusion for Flexible Behavior Synthesis.” arXiv, December 20, 2022. https://doi.org/10.48550/arXiv.2205.09991. [4] Pearce, Tim, Tabish Rashid, Anssi Kanervisto, Dave Bignell, Mingfei Sun, Raluca Georgescu, Sergio Valcarcel Macua, et al. “Imitating Human Behaviour with Diffusion Models.” arXiv, March 3, 2023. http://arxiv.org/abs/2301.10677. --- Rebuttal Comment 1.1: Title: Thanks Comment: The response addresses most of my concerns; therefore, I decide to keep my original ratings. --- Reply to Comment 1.1.1: Title: Thank you Comment: We thank the reviewer for the review and feedback again.
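As context for the denoising-score-matching training procedure discussed in the reviews above, here is a minimal sketch (our own illustration with a Gaussian toy target, not the paper's pipeline): regressing the conditional score $(x - \tilde{x})/\sigma_n^2$ on the noisy sample $\tilde{x}$ recovers the score of the noise-smoothed marginal.

```python
import numpy as np

rng = np.random.default_rng(4)

sigma_n = 0.5
x = rng.standard_normal(200000)                        # clean data from N(0, 1)
x_tilde = x + sigma_n * rng.standard_normal(200000)    # noise-corrupted copies

# DSM regression target is the conditional score (x - x_tilde) / sigma_n^2.
# For a linear model s(x_tilde) = a * x_tilde, the optimum is
# a = -1 / (1 + sigma_n^2): the score slope of the marginal N(0, 1 + sigma_n^2).
target = (x - x_tilde) / sigma_n ** 2
a_hat = np.sum(x_tilde * target) / np.sum(x_tilde ** 2)

assert abs(a_hat + 1.0 / (1.0 + sigma_n ** 2)) < 0.02
```

Repeating this with a gradually shrinking $\sigma_n$ gives the "gradually less noisy" training schedule referred to in the rebuttals, with the learned score approaching that of the clean data distribution.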
Fine-Grained Cross-View Geo-Localization Using a Correlation-Aware Homography Estimator
Accept (poster)
Summary: This paper proposes to warp the ground-view image to align with the corresponding aerial-view image using homography estimation. Firstly, a differentiable spherical transform is adopted to align the perspective between the ground-view and aerial-view images. Then a correlation-based homography estimator is proposed to align the similar parts of the transformed ground image and the aerial-view image. It achieves state-of-the-art performance on the VIGOR and KITTI datasets at a speed of 28 FPS. Strengths: + The idea is very interesting and the writing is good. + The presentation of figures and tables is also good. + The evaluation is comprehensive and the performance is promising. Weaknesses: - The figure looks good, but I am not able to get the detailed flow of data. Part 2 and part 3 of Fig 2 are connected only through the correlation map. Would the code be released? It would be better to add this claim to help reproduce the result. - The authors could add some insight about why the spherical transform is better than the polar transform/projective transform in the other papers. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See limitations and weaknesses. I am open to raising my rating after the rebuttal. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - The spherical transform is applied on query images on-the-fly, which might be a disadvantage compared to other transforms on aerial-view images, but the method is still fast. Is there a speed comparison with other methods? I think this could be a limitation that needs to be discussed. - The authors could add more discussion on positive/negative societal impact in the supplementary materials. I think this method should have some potential impact on industry. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The detailed flow of data between part 2 and part 3 of Fig 2 and code.** **A1:** Thank you for the thoughtful question. The mentioned "part 2" refers to the computation of the homography matrix between the BEV image and its corresponding satellite image, denoted as $H^k$ in the diagram. The resulting matrix is subsequently passed to "part 3" for the computation of the loss. In "part 3," the final homography matrix obtained from "part 2" is denoted as $H^{itr}$. We appreciate your interest. We will release our code. **Q2: Why the spherical transform is better than the polar transform / projective transform.** **A2**: Thank you for your constructive suggestion. First, our spherical transform projects the ground image to a bird's eye view, while the polar transform or projective transform projects the satellite image to the perspective of the ground image (such as aligning with a panoramic view). There is growing recognition of, and research on, the role of Bird's Eye View (BEV) representations in localization, navigation, and related tasks. The advantage of projecting to a bird's eye view, as compared to projecting the satellite image to the perspective of the ground image, lies in its more intuitive fit for the localization task and the need for only one projection at the start of the localization process. The latter approach, in contrast, requires selecting projection points (an initial pose or candidate poses) for projecting the satellite image, and if these points are not accurately chosen, the resulting projection may not align well with the ground image. Additionally, for more accurate localization, the latter approach requires applying the projection multiple times from different positions, which could potentially make it less real-time than our method. 
Furthermore, in contrast to polar-transform methods, our spherical transform is derived from the imaging model of the ground camera, adhering closely to geometric principles, while the polar transform is only a rough geometric alignment. In comparison to traditional Inverse Perspective Mapping (IPM) methods, our approach excels by not requiring any camera intrinsic or extrinsic parameters, thereby offering strong generality and versatility. It is precisely this advantage that allows us to project the panoramic images from the VIGOR dataset, which lacks camera calibration parameters, to a bird's-eye-view perspective. **Limit.1: The speed comparison between the spherical transform and other methods.** **A3**: We have evaluated the runtime of our spherical transform module, and it operates efficiently. Whether applied to the panoramic images in VIGOR or the frontal images in KITTI, the pixel correspondences between ground-level images and the bird's-eye view can be precomputed during initialization. Subsequently, by employing `torch.nn.functional.grid_sample`, obtaining the corresponding BEV image takes under 1 ms per instance. Furthermore, pixel projection calculations are matrix-based and GPU-accelerated using PyTorch, requiring less than 10 ms to complete. After the acceptance of our paper, we plan to release our code as open source. Additionally, we will provide a Colab link for those who are interested in exploring and evaluating our spherical transform module. **Limit.2: Potential impact on industry.** **A4**: Thank you for your constructive suggestion. We will add a discussion of the potential impact on industry to the supplementary materials. Here we briefly discuss the positive impact of our method. We believe that our approach holds significant promise for a range of sectors, particularly in domains closely tied to autonomous driving, navigation, and geospatial technology. 
By introducing a novel pipeline, our method has the potential to redefine workflows within these industries, opening up exciting avenues for exploration and innovation. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed my concerns and I will keep my rating. I have checked the other reviews, but I do not find any grounds for rejection. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Ak83, Thank you very much for your insightful review and your decision to maintain the rating of acceptance! We are pleased to hear that our rebuttal has addressed your concerns. Should there be any further questions, please don't hesitate to let us know. Best regards, The Authors.
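The precomputation argued in A3 above can be sketched as follows. This is a minimal illustration, not the authors' released code: the equirectangular-panorama assumption, the camera height, the metric range, and all names are ours. The point is that the panorama-to-BEV pixel correspondence depends only on the imaging geometry, so it is built once at initialization and then reused for every frame (e.g., via `torch.nn.functional.grid_sample`).

```python
import numpy as np

def bev_sampling_grid(bev_size, pano_h, pano_w, cam_height=2.0, max_range=20.0):
    """Precompute, once, which panorama pixel each BEV pixel samples from.

    Assumes an equirectangular panorama (rows = elevation, cols = azimuth)
    and a flat ground plane `cam_height` metres below the camera.
    Returns a (bev_size, bev_size, 2) array of (row, col) panorama coords.
    """
    c = (bev_size - 1) / 2.0
    # Metric ground coordinates of every BEV pixel (camera above the centre).
    xs = (np.arange(bev_size) - c) / c * max_range
    gx, gy = np.meshgrid(xs, -xs)            # x to the right, y forward
    r = np.sqrt(gx**2 + gy**2) + 1e-9
    azimuth = np.arctan2(gx, gy)             # (-pi, pi], 0 = straight ahead
    depression = np.arctan2(cam_height, r)   # angle below the horizon
    col = (azimuth / (2 * np.pi) + 0.5) * (pano_w - 1)
    row = (0.5 + depression / np.pi) * (pano_h - 1)  # horizon at mid-height
    return np.stack([row, col], axis=-1)

# Computed once at initialization; per frame one would only resample the
# panorama through this fixed grid (e.g., with grid_sample in PyTorch).
grid = bev_sampling_grid(bev_size=256, pano_h=512, pano_w=1024)
```

Near the BEV centre the depression angle approaches 90 degrees (sampling the nadir rows of the panorama), while distant BEV pixels sample rows just below the horizon, which matches the intuition behind Fig 1(a)(b).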
Summary: This paper addresses the problem of ground camera pose refinement by ground-to-satellite image matching. For this purpose, this paper proposes to project a ground image to the overhead-view image plane by using a homography, and then iteratively update the residual homography between the projected overhead-view image and the reference satellite image, from which the relative pose between the ground camera and the satellite image can be estimated. Experiments are conducted on two cross-view datasets, VIGOR and KITTI, and the results demonstrate the effectiveness of the proposed method. Strengths: + A new pipeline for ground-to-satellite image localization using homography updates is proposed; + The proposed method achieves state-of-the-art performance. Weaknesses: (1) This paper estimates the residual homography transformation between the reference satellite image and an overhead-view image projected from a ground-view image. The ground camera's pose is estimated from the homography. However, there is no explanation of how to estimate the camera pose (rotation and translation) from the homography. One sentence in L205-206 addresses orientation estimation. However, it seems that we should select different points from the ground-view image and project them to the overhead view by using the estimated homography, and the orientation is estimated by connecting the two points. If this is the way to estimate orientation, my first question is: "how is the translation estimated?" This is not described. My second question is, how are the two points selected, randomly? Will this be reliable? The severe occlusion between the ground and satellite images means that not all ground-image pixels are visible in the overhead view and vice versa. How to select/determine points that are co-visible by the two views for this orientation estimation? (2) If the pose is derived from the estimated homography, it should be a deterministic solution. 
How is the probability map in Fig.3 estimated? (3) The previous work by Shi and Li [22] proposes an iterative optimization network that directly optimizes pose parameters. At a high level, the difference between this paper and Shi and Li is that this paper optimizes homography parameters. However, the homography contains 5 degrees of freedom, which is more than the pose parameters (3 DoF). What is the superiority of this? What is the performance of the proposed method if we simply modify the homography parameter output to a pose parameter output? And similarly, since this paper uses optical flow for supervision, why not directly estimate the optical flow between the two images and then compute the relative pose from the flow map? (4) This paper claims that the GPS labels of the VIGOR and KITTI datasets are inaccurate and thus proposes to re-calibrate the pose labels. However, if the GPS labels themselves are inaccurate, different GPS-to-UTM conversion methods would not correct the error. Moreover, is there any investigation of what kind of GPS conversion methods were used in previous works when introducing the cross-view datasets? Don't they also use the Mercator projection or an equivalent projection, and why should the Mercator projection used in this paper provide more accurate UTM coordinates than the methods used in the original papers? Furthermore, if the ground truth has been modified, it is unfair to compare against previous works' results reported in their original papers, which used the inaccurate poses for training and evaluation. (5) The network is supervised using ground-truth (GT) pixel correspondences. How are the GT correspondences derived, according to the GT relative RT? However, as I mentioned before, due to the severe occlusions between ground and satellite images, not every pixel in the ground view is visible in the overhead view and vice versa. 
Thus, there must be incorrectly derived GT correspondences if all the pixels in the overhead/satellite images are used. (6) For the ground-to-satellite projection, this paper assumes the satellite image has a pin-hole camera projection with an FoV of 85 degrees. However, Shi et al. [26] approximate a parallel projection for the satellite images. What is the superiority of this pin-hole camera projection over the parallel projection, and why use an FoV of 85 degrees instead of other values? (7) The writing needs to be thoroughly improved and should be more accurate. For example: (i) Please do not simply use a number [*] for references as a component in a sentence without mentioning the authors' names. (ii) The sentence in L33 is not strictly correct; SliceMatch is also a kind of "repeat (pose) sampling". (iii) The sentence in L144 "without any intrinsic parameters" is very ambiguous: this paper does use focal length and FoV for satellite images (L136-137), and the ground-to-satellite homography-based projection should also use the ground camera's intrinsics when it is a pin-hole camera as in the KITTI dataset. (iv) The symbols in Sec 3.2 are very confusing. The authors mixed symbols for three images: the original ground image, the projected overhead-view image from the ground-view image, and the satellite image. This section needs to be thoroughly revised. (v) ... (8) Missing important references: (i) Fervers, Florian, et al. "Uncertainty-aware Vision-based Metric Cross-view Geolocalization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. (available on arXiv in 2022, before SliceMatch [11]) (ii) Shi, Yujiao, et al. "Where Am I Looking At? Joint Location and Orientation Estimation by Cross-view Matching." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020. (iii) Shi, Yujiao, et al. "CVLNet: Cross-view Semantic Correspondence Learning for Video-Based Camera Localization." 
Asian Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. (iv) Vyas, Shruti, Chen Chen, and Mubarak Shah. "GAMa: Cross-view Video Geo-localization." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. (v) Zhang, Xiaohan, Waqas Sultani, and Safwan Wshah. "Cross-View Image Sequence Geo-localization." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Please refer to my comments on Weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Using homography estimation between the ground and satellite images ignores the correspondences of objects not in the homography plane, potentially limiting the performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed reviews. Before addressing each individual question, we want to briefly introduce the pipeline of our proposed method to better clarify its technical details. 1. We use a spherical transform to project ground images onto a bird's-eye view (BEV). Subsequent operations are on the BEV image and the satellite image only. The projection effect is shown in Figure 1(a)(b). 2. The Homography Estimator yields a matrix. With this matrix, we align the BEV and satellite images (Figure 1(b)(c)), creating overlap (Figure 1(d)). 3. **(3.1) Translation:** After aligning the two images, we can project the center point of the BEV image onto the satellite image (L189), derive its pixel coordinates, and obtain GPS via the inverse of Equation 4. The location of the BEV center point corresponds to the camera position. **(3.2) Orientation:** We project both the center point of the BEV image and another point along its vertical centerline onto the satellite image. Connecting these projected points establishes the camera's orientation. This process is analogous to determining a direction on a map by connecting two points: your current location and a point directly ahead on your path. 4. Training labels: Our method, like others, avoids direct GPS training and instead relies on pixel coordinate labels. Prior methods used meter-to-pixel resolutions, posing city-specific challenges. In contrast, our Mercator-based approach is universally applicable and accurate across cities. **Q1.1: How we estimate the camera pose (T, R) through the homography.** **A1.1:** See 3.1 and 3.2 of the above pipeline. **Q1.2: "how to select the two points ... reliable? ... How ... co-visible by the two views ..."** **A1.2:** See 3.2 above for selecting points. Our correlation-aware mechanism aligns correlated portions of the BEV and satellite images, reducing the impact of non-visible areas (L53-57). 
Co-visibility between points isn't strictly required for translation/orientation estimation via homography, so there should not be a concern about the reliability of the selected points. In Fig 1(d), the vehicle in the BEV isn't visible in the corresponding satellite image. Using the homography obtained from aligning the visible parts, we can still locate the vehicle's position on the satellite image and get its GPS. **Q2: How the probability map in Fig.3 is estimated?** **A2**: The homography-derived pose is deterministic, and the probability map doesn't affect pose estimation. The map gauges localization confidence, which is crucial for navigation tasks. It is a correlation map between the BEV center and satellite points, an intermediate product in our network. Part 2 of Figure 2 illustrates how the map is generated. **Q3: Why deriving Homography instead of direct pose? "Why not directly estimate the optical flow."** **A3**: Our network does not use optical flow supervision. We use a pair of pixel coordinates obtained from the localization point. The homography aligns the images with geometry, uniquely solving the pose, so there is no need to learn a homography-to-pose transition. Our method can simplify networks and yield analytical solutions. Shi and Li's method [22] needs camera parameters (Equation 3) for projection and invokes the geometry projection model iteratively. Our method needs neither the parameters nor repetitive projection. Our data lacks optical flow supervision, and optical flow estimation is unnecessary as we only use two points for translation and orientation. **Q4: "Claims that the GPS labels ... inaccurate" and "... provide more accurate UTM coordinate ..."** **A4:** We **do not** consider the GPS labels problematic. Direct use of GPS information in training is challenging, necessitating pixel labels. VIGOR used a uniform meter-to-pixel conversion across cities to calculate pixel-level labels from GPS, leading to significant errors (Sup. L73-75). 
SliceMatch highlights this, improving matters by calculating city-specific resolutions, but still lacks precision. They mention in their Sup.: "VIGOR dataset have used a ground resolution equal to 0.114m/pixel for all 4 cities" and "We have calculated a new ground resolution for each city by averaging the ground resolutions." The VIGOR and KITTI satellite images are from Google Maps, which uses the Mercator projection, so Mercator is more accurate. SliceMatch fixed the VIGOR labels, and CCVPE uses them. Table 5 in our Sup. shows minor differences from ours, averaging <0.2m. Ours is universal and convenient. **Q5: "The network ... pixel correspondences. ... are used."** **A5**: For network supervision, we use a single pixel correspondence, not all image pixels. The BEV center corresponds to the camera GPS; Mercator gives the pixel label from the GPS. **Q6: "Assumes the satellite image has a pin-hole camera projection with an FoV of 85-degree."** **A6:** Our paper **does not** assume a satellite imaging model. We use the ground camera's imaging model for generating the BEV. Our projection doesn't need repeated sampling for pose estimation like Shi et al.'s [26]. The "FoV of 85 degrees" parameter affects the BEV image FoV, as shown in Fig 7. We chose 85 degrees to achieve the optimal alignment with the satellite map FoV (Sup. L34-37). **Q7: Writing.** **A7:** Thank you for your suggestions, but there are some misunderstandings, as listed below. (i) Thank you. We will revise our paper. (ii) By 'repeat sampling' we mean running the full process per pose; SliceMatch includes all candidate poses initially, so it is not "repeat sampling." (iii) No calibrated parameters are needed. See A6 and Sup. "A Projection Details." (iv) Sec 3.2 only aligns the BEV and satellite images (subscripts 'g' and 's'). The claim of mixed symbols is a misunderstanding; we did not use the original ground image in Sec 3.2. **Q8: References.** **A8:** We will add these papers in the revised paper. **Limit.1: "Using homography ..."** **A9:** Our method handles discernible contours, even those not co-planar. 
Co-visible regions yield good outcomes. Experimental results show lower errors, confirming robustness and efficacy. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. The response clears up some of my concerns, while others still exist. > (3.2) Orientation: We project both the center point of the BEV image and another point along its vertical centerline onto the satellite image. Connecting these projected points establishes the camera's orientation. While this is reasonable, isn't there a closed-form solution for estimating rotation and translation from the homography matrix (e.g., by SVD decomposition [\*])? [\*] Malis, Ezio, and Manuel Vargas Villanueva. "Deeper understanding of the homography decomposition for vision-based control." (2007). What is the difference? > Q3: Why deriving Homography instead of direct pose? I am not convinced; why not use the GRU to update the in-plane relative rotation and translation (maybe also scale) between the BEV image and the satellite image? The in-plane relative rotation & translation & scale is equivalent to the homography but with fewer degrees of freedom (DoF). Thus, they should be easier to learn compared to the homography. This relative rotation and translation is also the relative rotation and translation between the ground and the satellite image. > Q4: "Claims that the GPS labels ... inaccurate" and "... provide more accurate UTM coordinate ..." I can understand the label correction for the VIGOR dataset. It is based on the assumption that the per-pixel distance (ground resolution) of satellite images for different regions should not be constant. However, from my understanding, as long as the area is within the same UTM zone, the ground resolution is roughly the same. Thus, it is not clear how this correction will affect the performance. 
Furthermore, as in the initial comments, if the pose labels in the original dataset are inaccurate, it is unfair to compare against previous works' results reported in their original papers, which used the incorrect poses for training and evaluation. At least the authors should select the most state-of-the-art method and re-train & evaluate it with the labels corrected by this paper. There is also a label creation illustration for the KITTI dataset in the supplementary material. Although the steps sound reasonable, my confusion is: doesn't the original cross-view KITTI dataset provide GPS labels for ground and satellite images? Otherwise, how did previous works train and evaluate on this dataset? > Q6: "Assumes the satellite image has a pin-hole camera projection with an FoV of 85-degree." > A6: Our paper does not assume a satellite imaging model. We use the ground camera's imaging model for generating the BEV. Our projection doesn't need repeated sampling for pose estimation like Shi et al.'s [26]. > The "FoV of 85 degrees" parameter affects the BEV image FoV, as shown in Fig 7. We chose 85 degrees to achieve the optimal alignment with the satellite map FoV (Sup. L34-37). The BEV image assumes a pin-hole camera projection with an 85-degree FoV. However, if the scene is not planar and the BEV image projection differs from the satellite image projection, the relative transformation between the BEV image and the satellite image cannot be assumed to be a homography. The satellite image is often approximated as an orthogonal projection. Thus, a straightforward understanding would be that the BEV image should also be approximated as an orthogonal projection. What is the problem with this? What is the superiority of the pinhole camera projection assumption over this orthogonal projection? For now, I keep my original rating. --- Reply to Comment 1.1.1: Comment: Thank you for your further inquiries. We are committed to addressing all the concerns you have raised. 
**Q1: Our method vs. SVD decomposition** **A1:** - SVD decomposition, as used to obtain the camera pose (R, t), requires intrinsic camera parameters, per Eq (1) and (2) in Section 2.2 of the reference. Yet these parameters are not available for our BEV and satellite images. - SVD decomposition of the homography matrix provides the pose (R, t) between cameras, yet the lack of depth information results in scale ambiguity, limiting accurate ground camera position estimation. - Our approach is intuitive and efficient, offering a superior alternative to the complex and potentially time-consuming SVD decomposition. **Q3: Fewer DoF vs. Homography** **A3:** The transformation between the BEV and satellite images encompasses more than 3 DoF. In practice, there may be potential roll and pitch deviations in addition to translation and yaw. The homography matrix accommodates these additional degrees of freedom in the perspective transformation, rendering it a more realistic and robust choice for our scenario. Our homography estimation aligns correlated portions of the BEV and satellite images. Co-visibility between points isn't strictly required for translation/orientation estimation via homography. This characteristic enhances the robustness of our method. For instance, in Fig 1(d), the vehicle in the BEV isn't visible in the corresponding satellite image. Using the homography, we can **align the vehicle's position on the satellite image and get its GPS.** This capability also extends the method's applicability. **Q4: VIGOR Label Correction** **A4:** The per-pixel distance of satellite images varies by region, evident from **SliceMatch's Supp.'s Table 4** and our Supp.'s Table 5. The labels we computed are very close to the corrected labels from SliceMatch and CCVPE (average error < 0.16m). 
We conducted experiments using their corrected labels for training, and the results showed that our label correction **did not significantly affect model performance** (mean distance errors on the val data after augmentation were 3.55428m and 3.55148m, respectively). However, in order to **ensure the rigor of research and facilitate future research**, we still revised the data. Our method's merit lies in its **generality and convenience**. Unlike SliceMatch and CCVPE, our approach does not require city-specific resolutions. The following is the code of CCVPE:

    if city[batch_idx] == 'NewYork':
        meter_distance = pixel_distance * 0.113248 / 512 * 640
    elif city[batch_idx] == 'Seattle':
        meter_distance = pixel_distance * 0.100817 / 512 * 640

However, we appreciate your suggestion and have conducted experiments by retraining and evaluating the current SOTA model (CCVPE) using our corrected labels. The results demonstrate that our labels do not noticeably impact model performance. **Q5: Train label creation for KITTI** **A5:** The original cross-view KITTI dataset provides GPS labels for both ground and satellite images. In the absence of noise, the center pixel coordinates of the satellite images serve as the ground-truth label for the position of the ground camera. Previous methods randomly displaced and rotated satellite images to generate pose labels. We followed a similar procedure during training and evaluation. Additionally, our model requires a pair of matched ground and satellite image pixel points. The GPS of the ground image in KITTI refers to the camera's optical center, which is not visible in its front-view image. Therefore, based on the camera's calibration parameters and point cloud data, we calculated the GPS coordinates corresponding to a pixel point in the BEV image. Then, using the Mercator method, we calculated the pixel coordinates of that GPS location on the satellite map. 
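The Mercator-based label computation discussed in A4 and A5 can be illustrated with a generic Web Mercator sketch. This is our own illustration under standard assumptions (256-pixel tiles, the usual Web Mercator equations), not the authors' code:

```python
import math

TILE = 256  # Web Mercator tile size in pixels

def gps_to_pixel(lat, lon, zoom):
    """Project (lat, lon) to global Web Mercator pixel coordinates."""
    scale = TILE * (2 ** zoom)
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

def ground_resolution(lat, zoom):
    """Metres per pixel at a given latitude: shrinks with cos(lat)."""
    return 156543.03392 * math.cos(math.radians(lat)) / (2 ** zoom)
```

The latitude dependence is the point at issue: at zoom level 20 this formula gives roughly 0.113 m/pixel near New York's latitude and roughly 0.101 m/pixel near Seattle's, consistent with the per-city constants hard-coded in the CCVPE snippet above, but without any city-specific tables.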
**Q6: Pinhole camera projection for the ground camera & orthogonal projection for the satellite image** **A6:** Our goal is to estimate the corresponding pixel points of each pixel on the BEV imaging plane based on the ground camera's imaging model. Thus, we need to choose a method that is **more in line with the ground camera's imaging model**, namely the pinhole imaging model. **In Shi et al., "Accurate 3-DoF Camera Geo-Localization via Ground-to-Satellite Image Matching" [26], Fig 5** illustrates both the ground camera and satellite image imaging models. The illustration reveals that the ground camera's BEV obtained through the pinhole imaging model and the satellite image derived via orthogonal projection genuinely share the same viewpoint. Both the satellite and our BEV images **project real-world points to a BEV imaging plane**. Building upon this imaging approach, imagine a concave semicircular surface on the ground; even though it is not planar, the BEV and satellite images can still be aligned through a homography. Furthermore, our method has achieved SOTA mean localization accuracy, which demonstrates its robustness. Please let us know if you have any additional concerns. --- Reply to Comment 1.1.2: Comment: Dear Reviewer NjYP, We greatly appreciate your previous inquiries. As the discussion phase is nearing its conclusion, we would like to confirm whether we have adequately addressed your questions. Should you have any further inquiries or require additional clarification, please do not hesitate to inform us. Best regards, The Authors
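The two-point pose recovery described in (3.1)/(3.2) of the pipeline summary above can be sketched as follows. This is our own minimal illustration, not the authors' implementation; we assume H maps BEV pixel coordinates (with the camera at the BEV centre, "up" meaning forward) into satellite pixel coordinates:

```python
import numpy as np

def apply_h(H, pt):
    """Apply a 3x3 homography to a 2D point (u, v), with homogeneous normalisation."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]

def pose_from_homography(H, bev_center):
    """Translation: project the BEV centre (the camera position) into the
    satellite image. Orientation: project a second point directly ahead of
    the centre and take the direction of the connecting segment."""
    cam_px = apply_h(H, bev_center)               # camera position in satellite pixels
    ahead = (bev_center[0], bev_center[1] - 1.0)  # one step up the vertical centreline
    ahead_px = apply_h(H, ahead)
    d = ahead_px - cam_px
    yaw_deg = np.degrees(np.arctan2(d[0], -d[1])) # 0 deg = satellite-image "up"
    return cam_px, yaw_deg
```

Note how this sidesteps homography decomposition entirely: no intrinsics are needed, and the projected centre pixel can then be converted to GPS (e.g., by inverting a Mercator-style pixel mapping), matching the rebuttal's argument against an SVD-based decomposition.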
Summary: The paper addresses the fine-grained cross-view geo-localization task, which matches camera ground images with a satellite image patch covering the same area to determine the geo-pose of the camera. The proposed approach projects ground images onto a bird's-eye-view perspective and formulates the task as a 2D image alignment problem. A correlation-based homography estimation module is proposed to achieve precise localization. Experiments are presented on the VIGOR and KITTI datasets. Strengths: + The proposed correlation-aware homography estimation module is interesting + Experiments show promising results on the VIGOR and KITTI datasets. Weaknesses: The paper has interesting ideas, but I think the paper needs more work to be convincing. The weaknesses are mentioned below: -- The use of BEV representations has been explored for the fine-grained cross-view geo-localization task [Ref1]. This paper should have been discussed and compared in the experiments. [Ref1] F. Fervers et al., Uncertainty-aware Vision-based Metric Cross-view Geolocalization, CVPR 2023 -- The novelty behind the spherical transform module to project ground images to a bird's-eye-view (BEV) perspective is not clear. Inverse Perspective Mapping (IPM) has long been commonly used for transforming camera images to BEV [Ref2, Ref3]. Also, looking at the generated BEV images, and from experience using [Ref3] for generation, the BEV images do not seem any better than the quality achieved by applying the IPM technique. [Ref2] H. A. Mallot et al., "Inverse perspective mapping simplifies optical flow computation and obstacle detection," Biological Cybernetics, 1991 [Ref3] L. Reiher, et. 
al., "A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird's Eye View", IEEE International Conference on Intelligent Transportation Systems, 2020 -- There are also many works that use neural networks to generate BEV representations from ground camera images. Some of these papers should be discussed (e.g., [Ref4, Ref5]). [Ref4] A. Saha et al., Translating Images into Maps, ICRA 2022 [Ref5] Z. Li et al., BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers, ECCV 2022 -- The utilization of the infoNCE loss for cross-view image geo-localization is also not new [Ref6, Ref7]: [Ref6] Y. Zhu et al., Simple, Effective and General: A New Backbone for Cross-view Image Geo-localization, arXiv 2023 [Ref7] F. Deuser et al., Sample4Geo: Hard Negative Sampling For Cross-View Geo-Localisation, arXiv 2023 -- Experiments do not show consistent improvements, as evident from Table 2. There are several cases where other baselines perform better. -- Not sure of the reproducibility of this work, and no code is provided in the supplementary. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weaknesses. I have read the authors' rebuttal and other reviews. I am still not convinced and keep the score the same. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: "The use of BEV representation has been explored ..."** **A1:** Sorry for the confusion; we'd like to clarify a point. The **BEV (Bird's Eye View) images** used in our paper are explicitly derived by exploiting the geometry of the scene. In contrast, [Ref1] and others utilize networks to learn how to project other-view image inputs to BEV representations at the **feature level**. It's important to note that none of these references employ explicit BEV images resembling Fig. 1(b) in our paper. While some research focuses on learning BEV representations within the BEV space, we explore the use of BEV images for fine-grained cross-view geo-localization. In the Related Work of [Ref1], it is explicitly mentioned that "PV2BEV (the perspective view to bird's eye view transformation) methods can be categorized based on whether they explicitly exploit the geometry of the scene to bridge the gap between PV and BEV or learn the mapping in a data-centric manner." They follow the second approach, using a spatiotemporal transformer encoder for multi-camera/timestamp to BEV mapping; our method uses the first approach. Our proposed pipeline specifically requires BEV images for direct alignment with aerial imagery, rather than employing a high-level BEV representation (which exists in the feature space). As evident from Figure 1, our BEV method intuitively enhances information alignment with aerial imagery, reducing network complexity. We plan to expand the discussion on various BEV representation methods in the related work section. However, direct comparisons may not be feasible due to fundamental differences between our BEV approach and [Ref1]. Furthermore, [Ref1] uses datasets (KITTI-360) with multiple vehicle cameras for joint localization involving temporal data. In contrast, our method directly localizes using single panoramic (VIGOR) or frontal-view (KITTI) images aligned with satellite maps. 
**Q2: The novelty and advantage of our spherical transform compared with IPM.** **A2:** Sorry for the confusion. Our spherical transform achieves high-quality BEV images from ground images, comparable to IPM, without the need for the camera's intrinsic and extrinsic parameters, as detailed in the 'A Projection Details' section of the Supplementary Materials. As mentioned in [Ref1]'s related work, "IPM transforms PV features to BEV via a homography based on the camera’s intrinsic and extrinsic parameters." The IPM method requires knowledge of the camera's intrinsic and extrinsic parameters, which are not available for the VIGOR dataset. Our proposed spherical transform module can project ground images to a BEV perspective without any camera intrinsic or extrinsic parameters. In conventional IPM, distinct parameters are required for calculating BEV images in different cities, tailored to each camera setup. In contrast, our spherical transform enables a unified approach and interface for projecting BEV images. Moreover, our spherical transform operates efficiently: the pixel correspondences between ground-level images and the BEV can be precomputed during initialization, and obtaining the BEV image takes under 1 ms for each processing instance. In addition, pixel projections are matrix-based and GPU-accelerated in PyTorch, completing in <10 ms. **Q3: "There are also many works that use neural networks to generate BEV representation ..."** **A3:** Please refer to Q1 for the distinction between BEV representations and the BEV images derived by our method. We will add a discussion of these differences to our revised paper. Also, there is a significant distinction between our explicit method for obtaining BEV images and the BEV representation approaches [Ref1, Ref4, Ref5]. The significance of our method lies in its ability to operate without camera intrinsic and extrinsic parameters, coupled with its rapid processing speed.
This allows for seamless integration into applications employing BEV representation methods. **Q4: "The utilization of infoNCE loss for cross-view image geo-localization is also not new."** **A4:** The utilization of InfoNCE loss is not the main contribution of our work. Although InfoNCE loss has been explored previously ([Ref6, Ref7]), our motivation differs: we highlight its role in homography estimation given limited point supervision, as it maximizes label utilization during training. As stated in our introduction, homography estimation typically requires a minimum of four corresponding point pairs for accurate computation. However, the majority of the datasets we work with only provide a single pair of points as supervision (i.e., the localization position). Furthermore, in Section 4.5, we have also indicated that our model training can be accomplished even without this loss, yet still yields competitive and robust outcomes (Table 3). **Q5: "Experiments do not show consistent improvements, as evident from Table 2."** **A5:** The primary discrepancy in performance in the original paper lies in the orientation estimation. Experimental results demonstrate our model's capability to estimate orientation even without orientation labels during training. The introduction of orientation supervision leads to a significant improvement in orientation estimation performance. We include an updated Table 2 after incorporating orientation supervision; please refer to the global response and Table 1 in the uploaded PDF for more details. **Q6: Reproducibility of our work.** **A6:** We will release our code, model checkpoints, and training scripts. --- Rebuttal Comment 1.1: Title: Comment by Reviewer ZVTo Comment: Thanks for the rebuttal. However, I still hold my original opinion. I think detailed analysis and comparison with other geometry-based and learning-based BEV methods are critical, which is missing.
About IPM compared to the proposed spherical transform approach, I still do not think there is a significant enough difference. IPM can utilize the actual camera's intrinsic and extrinsic parameters. However, IPM would also be able to use the image height, width, and the 85-degree field-of-view (as used in the paper) to estimate an intrinsic matrix. The extrinsics used are relative and help IPM more when the BEV is generated from multiple images with displacement from the center. --- Reply to Comment 1.1.1: Comment: **Q1: 'I think detailed analysis and comparison with other ... BEV methods are critical.'** **A1:** Thank you for suggesting a comparison between our method and other geometry-based and learning-based BEV methods. While we appreciate the suggestion to conduct a comparative experiment with [Ref1], it's important to note that [Ref1] chose to create a new dataset for their study, **instead of utilizing the commonly used VIGOR or KITTI datasets** in the Fine-Grained Cross-View Geo-Localization field. This decision is likely due to two main factors: **1.** The need for multi-view ground cameras: - "The ground image of the **i-th** camera is encoded into feature map F_{Gi}." **2.** The need to use true camera intrinsic parameters for memory and computational optimization. VIGOR's panoramic images lack this requirement: - "The points are ... projected onto the camera plane **using its extrinsic and intrinsic parameters**." However, we found another recently accepted learning-based BEV method [Ref8] among the papers that cite [Ref1]. **This article is new**, posted on arXiv on July 16, 2023, and **uses the same VIGOR and KITTI datasets as us**: - [Ref8] Yujiao Shi et al., "Boosting 3-DoF Ground-to-Satellite Camera Localization Accuracy via Geometry-Guided Cross-View Transformer" This paper explicitly describes the use of a transformer method to learn image representations in the BEV space.
According to the experimental results provided in [Ref8], our method **exhibits significant advantages in comparison.** Please refer to Table 1 and Table 2 below for specific comparison results:

Table 1 (Comparison results on **VIGOR** with aligned orientation):

| Method | Area | ↓Mean(m) | ↓Median(m) |
| :------ | :----- | :----------- | :----------- |
| [Ref8] | Same | 4.12 | 1.34 |
| Ours | Same | **3.36 (↓0.76)** | 1.36 (↑0.02) |
| [Ref8] | Cross | 5.16 | 1.40 |
| Ours | Cross | **3.96 (↓1.2)** | 1.68 (↑0.28) |

Table 2 (Performance comparison on **KITTI** with 20$^{\circ}$ orientation noise, where Test 1 corresponds to the same area and Test 2 corresponds to the cross area):

| Method | Area | ↑Lateral R@1m (%) | ↑Lateral R@5m (%) | ↑Long. R@1m (%) | ↑Long. R@5m (%) |
| :------ | :----- | :------------- | :------------ | :----------------- | :----------------- |
| [Ref8] | Test 1 | 76.44 | 98.89 | 23.54 | 62.18 |
| Ours | Test 1 | **98.09 (↑21.65)** | **100.0 (↑1.11)** | **89.37 (↑65.83)** | **99.31 (↑37.13)** |
| [Ref8] | Test 2 | 57.72 | 91.16 | 14.15 | 45.00 |
| Ours | Test 2 | **65.36 (↑7.64)** | **96.02 (↑4.86)** | **54.33 (↑48.18)** | **80.32 (↑35.32)** |

Furthermore, we would like to emphasize our method's motivation and the distinction between our approach and other BEV representation methods.

- Our method aims to obtain 3-DoF poses of ground cameras by **aligning ground BEV images** with satellite images from the same viewpoint.
  - So, what our method requires is an **explicitly obtained BEV image**.
- In contrast, methods such as [Ref1]:
  - first employ an encoder to generate **feature maps** of ground images,
  - and then use transformers to construct the BEV representation based on these feature maps, using the ground image features as keys and values.
  - The outcome of these methods is a feature map.

**Q2: 'IPM will also be able to ... estimate an intrinsic. The extrinsic ...
is generated using multiple images with displacement from the center.'** **A2:** Our spherical transform approach is motivated by the need to convert **panoramic images** into BEV images. To the best of our knowledge, no readily available IPM method fits our panoramic data; thus, we derived and implemented this method ourselves. Traditional IPM methods involve three coordinate systems: the image's, the camera's, and the world's. The image coordinate system is **measured in pixels**, while the camera and world coordinate systems use **meters**. Traditional IPM computes the spatial coordinates on the ground, in world coordinates, corresponding to image pixel coordinates, and then obtains the BEV image. Our method directly computes the mapping **between the original image and the BEV image**, eliminating the need for coordinates in meters. This simplifies the process to a certain extent. Extrinsic parameters in IPM are not just for multiple images. The derivation of IPM for individual frames also requires the **camera's height above the ground as an extrinsic parameter**, which is not available in VIGOR. For example, the MATLAB IPM routine (https://ww2.mathworks.cn/help/driving/ref/birdseyeview.html) explicitly states the need for this parameter: "Set the height of the camera to be about 2 meters above the ground." Similarly, in Bertozzi et al.'s 'Stereo Inverse Perspective Mapping: Theory and Applications,' Eq. (1), 'h' is the true camera height above the ground. Please let us know if you have further concerns. --- Reply to Comment 1.1.2: Comment: Dear Reviewer ZVTo, We greatly appreciate your previous inquiries. As the discussion phase is nearing its conclusion, we would like to confirm whether we have adequately addressed your questions. Should you have any further inquiries or require additional clarification, please do not hesitate to inform us. Best regards, The Authors
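The precomputed panorama-to-BEV correspondence discussed in this thread can be made concrete with a minimal, hypothetical sketch. Ground distances here are expressed in units of the camera height, so no intrinsics or metric extrinsics are assumed; the function name and layout conventions are illustrative, not the authors' actual implementation:

```python
import math

def panorama_to_bev_lut(W, H, S, extent=8.0):
    """Precompute, for each pixel of an S x S BEV image, the (col, row)
    pixel to sample from a W x H equirectangular panorama.

    Assumptions (illustrative): the camera sits at the origin, one unit
    above a flat ground plane; `extent` is the half-width of the BEV
    footprint in units of the camera height; the panorama covers
    360 x 180 degrees with the horizon at mid-height.
    """
    lut = {}
    for v in range(S):
        for u in range(S):
            # BEV pixel -> ground-plane point (x forward, y right)
            x = (0.5 - v / (S - 1)) * 2 * extent
            y = (u / (S - 1) - 0.5) * 2 * extent
            azimuth = math.atan2(y, x)                      # (-pi, pi]
            depression = math.atan2(1.0, math.hypot(x, y))  # angle below horizon
            col = (azimuth / (2 * math.pi) + 0.5) * (W - 1)
            row = (0.5 + depression / math.pi) * (H - 1)
            lut[(u, v)] = (col, row)
    return lut
```

Once such a table is built, producing a BEV image is a pure gather operation (one lookup per pixel), which is consistent with the sub-millisecond timing the authors report for precomputed correspondences.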
Summary: The paper proposes a homography estimation-based method for cross-view geo-localization. Contrary to existing methods that tackle the problem as a retrieval problem, the paper proposes to reformulate the problem as homography estimation of aligning the birds-eye-view against the satellite image. To this end, the authors propose a recurrent homography estimation module which learns to estimate the homography matrix given the feature maps of birds-eye-view and satellite images. Experiments show that the method can demonstrate effective performance against the tested baselines in most metrics. Strengths: 1. The idea of using bird's eye view for matching against satellite images is intuitive, and the proposed method well leverages the formulation by casting the geo-localization problem as homography estimation. 2. Experiments make valid comparisons against both existing geo-localization methods and also on a few image matching-based methods, where the proposed method shows performance enhancements. Notably, the paper shows large amounts of improvements in translation estimation, which I find to be highly practical for geo-localization tasks. 3. Writing is very clear. It was straightforward to follow the paper's motivation and how the proposed method functions as a whole. Weaknesses: My concerns are two-fold: 1. While I favor the paper's formulation of aligning bird's eye view against satellite images, I feel the paper has only included a limited number of homography-based baselines. Currently, the paper has only made comparisons against homography estimation from local feature matches in Table 3. However, there are many more homography estimation methods that could additionally be tested, for example 'Deep Image Homography Estimation' (which is also introduced in the related works section). Adding more homography-based methods for comparisons will better validate why the proposed recurrent homography estimator is needed.
Related, how are the homographies estimated from SuperGlue, LoFTR matches? Have the authors used a RANSAC-style estimator to handle outliers? It seems that the paper is currently missing details on this part, which is crucial for local feature matching-based methods to work effectively. 2. I have not fully grasped the reason why the method performs poorly for rotation estimation in KITTI. Is this due to not using the orientation data labels during training? A better clarification of this phenomenon will be helpful for readers to better understand the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the 'weaknesses' section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not explicitly discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1.1: Comparison with other homography-based methods.** **A1.1:** Thank you for your constructive suggestion. In the introduction of our paper, we highlight two significant challenges that arise during homography estimation in the cross-view localization task. The first challenge pertains to the substantial presence of occlusions and ambiguous content in the scene (as indicated in L52-L53), while the second is the lack of compact supervision information for homography estimation, i.e., a minimum of four matching point pairs (as stated in L59-L60). These challenges are why we did not employ other homography estimation methods. For instance, methods like "Deep Image Homography Estimation" require complete supervision. We also considered the use of unsupervised methods, where such techniques generally compute the similarity between two images to obtain a loss. However, our early experiments demonstrated that an L1 loss (photometric similarity) and an SSIM loss (structural similarity) were not very effective. Furthermore, our envisioned application scenarios, such as autonomous driving and robot navigation, demand high real-time performance from the network. Hence, methods like those mentioned in the "Related Work" section, which are based on GANs and transformers and involve substantial computational overhead, are not suitable. One of the most pivotal attributes of our proposed homography estimator is its correlation-aware mechanism. Our choice of the correlation-aware homography estimator is inherently well-suited to our task's demands, encompassing weakly supervised training, adeptness in handling occluded scenes, and real-time capability. However, we will explore testing other advanced unsupervised homography estimation networks in the future.
**Q1.2: How are the homographies estimated from SuperGlue, LoFTR matches?** **A1.2:** We employed the RANSAC method, specifically applying the findHomography function from the OpenCV library on the matches obtained from SuperGlue and LoFTR, utilizing the cv2.RANSAC parameter. We will add the relevant information to the paper. **Q2: Performance of rotation estimation on the KITTI dataset** **A2:** Thank you for the thoughtful question. Indeed, our method does not utilize orientation labels, unlike the other methods we compare against in Table 2 of our paper. L204-L206 in the main paper detail how we estimate orientations. Our experimental results highlight that our model can estimate orientation without orientation labels during training, showing a particularly marked improvement in estimation accuracy on the VIGOR dataset compared to previous methods. The diminished performance in orientation estimation on the KITTI dataset may be due to the presence of only front-view images, in contrast to VIGOR's panoramic images. This limited field of view in KITTI restricts the available information, leading to suboptimal performance. For instance, the projected BEV image covers less than one-fourth of the corresponding satellite image, as depicted in Fig. 4(d), compared to over two-thirds in the VIGOR dataset, as shown in Fig. 1(d). We have also tried using orientation labels to guide our model training, employing an L1 loss on the orientation error, which has led to substantial improvement in orientation estimation on the KITTI dataset. The table below illustrates the enhanced orientation performance compared to training without orientation labels. Even without meticulous hyper-parameter tuning, the orientation estimation has improved significantly. Specifically, under the "Same" and "Cross" settings, the mean orientation estimation error has decreased by 3.13 degrees and 3.92 degrees, respectively.
We have also updated Table 2 to include orientation supervision; please refer to Table 1 in the PDF of the global response.

| Area | Ori. Label | ↓Mean | ↓Median | ↑R@1$^\circ$ | ↑R@5$^\circ$ |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Same | w/o. | 3.93 | 3.34 | 15.45 | 70.05 |
| Same | w. | 0.80 (↓3.13) | 0.61 (↓2.73) | 71.87 (↑56.42) | 99.62 (↑29.57) |
| Cross | w/o. | 7.10 | 4.60 | 11.78 | 53.23 |
| Cross | w. | 3.18 (↓3.92) | 1.94 (↓2.66) | 27.83 (↑16.05) | 82.21 (↑28.98) |

--- Rebuttal Comment 1.1: Comment: Thanks for the additional set of experiments and clarifications. I am willing to keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer NnZj, Thank you very much for your recognition and your recommendation for acceptance. We appreciate your thoughtful review. Should you have any further questions or concerns, please don't hesitate to reach out. We are here to respond to your inquiries, as well as those of the other reviewers. Best regards, The Authors.
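The RANSAC-based homography fitting mentioned in A1.2 can be sketched in a few lines. This is a generic DLT-plus-RANSAC illustration of what `cv2.findHomography` with `cv2.RANSAC` computes, not the OpenCV implementation itself; the function names and thresholds below are illustrative:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform: fit H such that dst ~ H @ src,
    from N >= 4 point correspondences (each of shape (N, 2))."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)   # null vector = smallest singular vector
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """Minimal RANSAC loop around the DLT solver (illustrative)."""
    rng = np.random.default_rng(seed)
    src_h = np.hstack([src, np.ones((len(src), 1))])
    best_h, best_inl = None, np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        if not np.isfinite(H).all():
            continue  # degenerate (e.g. collinear) sample
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inl = np.linalg.norm(proj - dst, axis=1) < thresh
        if inl.sum() > best_inl.sum():
            best_h, best_inl = H, inl
    if best_inl.sum() >= 4:   # least-squares refit on all inliers
        best_h = dlt_homography(src[best_inl], dst[best_inl])
    return best_h, best_inl
```

The hypothesize-and-verify loop is what makes the local-feature baselines robust to the mismatched SuperGlue/LoFTR correspondences that occlusion and ambiguous content produce.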
Rebuttal 1: Rebuttal: We extend our sincere thanks to all reviewers for their insightful feedback. Our method has been recognized as "interesting" (Reviewer Ak83, ZVTo), "new" (Reviewer NjYP), and "intuitive" (Reviewer NnZj). We are gratified that the "large amounts of improvements" (Reviewer NnZj), "state-of-the-art performance" (Reviewer NjYP), and "promising results" (Reviewer ZVTo, Ak83) of our method, along with the "comprehensive evaluation" (Reviewer Ak83), have been collectively acknowledged. The clarity of writing, as well as the presentation of figures and tables, has also been commended (Reviewer NnZj, Ak83). We have addressed individual comments in the following sections and hope our responses resolve any remaining concerns. --- **For Reviewer NnZj and Reviewer ZVTo** In Table 1 of the newly uploaded PDF, we have included an updated version of Table 2 from our main paper, utilizing orientation labels (without meticulous hyper-parameter tuning). This update demonstrates significant improvement in orientation estimation. We will include the relevant details in our revised paper. We would like to draw your attention to the SliceMatch metrics, which were directly borrowed from the original paper. However, the two red-marked values are likely erroneous, as they are identical. The values for R@$1^{\circ}$ and R@$5^{\circ}$ should not be equal, and based on the comparison with our method on other metrics, it is likely that the value for R@$1^{\circ}$ is incorrect. We suspect this value may have been copied inaccurately. Additionally, it's worth noting that CCVPE is concurrent work to ours and has not yet undergone peer review. Compared to SliceMatch (CVPR 2023), our model consistently exhibits substantial improvements across all metrics. --- Thank you once again for all your constructive insights! Pdf: /pdf/94678d7b9f0e1983892072789117e9c9d16221dd.pdf
NeurIPS_2023_submissions_huggingface
2023
On Separate Normalization in Self-supervised Transformers
Accept (poster)
Summary: The paper proposes a simple modification to Transformer-based self-supervised learning by having distinct normalization (parameters) for the CLS token, which captures the global information for use in downstream tasks, and the remaining tokens. The motivation is that this allows the tokens to avoid dimension collapse, as indicated by a lower uniformity score, which in turn correlates with higher downstream performance. The paper verifies in a case study that this is empirically the case. The proposed method, SepNorm, is evaluated in a controlled setting against the standard approach, ShareNorm, on various datasets from different domains. SepNorm consistently achieves improvements. An additional ablation study confirms that this is also the case when uniformity is encouraged explicitly through an additional loss term. Strengths: * The paper proposes an extremely simple, yet consistently effective solution to mitigate the problem of dimensional collapse and thereby improve performance. * The paper is concerned with general self-supervised Transformer architectures, which are relevant to the majority of the NeurIPS audience. * The experiments are very clear and well executed; the controlled experimental setup leaves little room for doubt regarding the method's merits. * The paper is generally self-contained; all necessary pieces can be understood by a general ML-savvy audience. * The idea of using separate sets of learnable parameters across different components of the model is of course not new per se, but the application context and the resulting insights constitute sufficient novelty. Weaknesses: The only main weakness is that the paper could be structured better. Sections 3.1 and 3.2 mix the description of the proposed method with experimental results that are supposed to motivate the method. However, those results already rely on the existence of the proposed method, and thus do not serve as motivation.
Rather, they constitute useful analyses of the proposed method, which could just as well be discussed in the "Experiments" section. This has the added benefit that the datasets used for generating results in Figures 3 and 4 have already been introduced (which only happens in 5.1). In fact, Sections 3.1 and 3.2 never explicitly mention that these are computer vision datasets. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * In section 4, alternative normalization methods are discussed. To what extent do these address the dimensional collapse problem? How would your results change with different normalizations, i.e., to what extent is your method orthogonal to these? * Table 2: It is unclear what you mean by BERT-base and RoBERTa-base. Do these refer to the original models, or are these finetuned in the same way as the "+SepNorm" models? If it's the former, how do we know that the improvement isn't merely a result of continued finetuning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper does not address limitations. I think it would be useful to discuss whether the results would extend to other types of normalization (see question above), and what potential avenues for future work are. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive feedback on our paper, as well as the suggestions that can improve our submission's quality. Below we try our best to address the reviewer's concerns and questions. ## [Weakness] **Q1: The only main weakness is that the paper could be structured better. Sections 3.1 and 3.2 mix the description of the proposed method with experimental results that are supposed to motivate the method. However, those results already rely on the existence of the proposed method, and thus do not serve as motivation. Rather, they constitute useful analyses of the proposed method, which could just as well be discussed in the "Experiments" section. This has the added benefit that the datasets used for generating results in Figures 3 and 4 have already been introduced (which only happens in 5.1). In fact, Sections 3.1 and 3.2 never explicitly mention that these are computer vision datasets.** R1: Thank you for your feedback! We will restructure the methodology section as suggested to enhance the clarity and coherence of our paper. We believe your suggestion will improve the readability of our work. ## [Questions] **Q1: In section 4, alternative normalization methods are discussed. To what extent do these address the dimensional collapse problem? How would your results change with different normalizations, i.e., to what extent is your method orthogonal to these?** R1: Thank you for your insightful question. Our method is entirely distinct from previous normalization approaches in terms of both its motivation and methodology. Furthermore, our method can be seamlessly integrated with any normalization approach. As far as we know, the alternative normalization methods discussed in Section 4 do not specifically aim to tackle the dimensional collapse problem; instead, they primarily focus on stabilizing the training process of transformers.
To better understand how our method can complement alternative normalization techniques, consider the following insights: 1. **For non-contrastive self-supervised training**, certain normalization methods like BatchNorm and PowerNorm can also promote uniformity. If PowerNorm demonstrates better training-process stabilization than BatchNorm, we recommend using PowerNorm. 2. **For contrastive self-supervised training**, it’s empirically shown that LayerNorm may perform better than BatchNorm [1], as it does not utilize batch statistics. This observation may also extend to PowerNorm. It is worth noting that we have not discussed other normalization methods such as GroupNorm and PairNorm, since they are not explicitly designed for transformers. We hope that this response addresses the reviewer's concerns. Please feel free to inquire about any other questions you may have. **Q2: Table 2: It is unclear what you mean by BERT-base and RoBERTa-base. Do these refer to the original models, or are these finetuned in the same way as the "+SepNorm" models? If it's the former, how do we know that the improvement isn't merely a result of continued finetuning?** R2: We apologize for the confusion: "+SepNorm" denotes BERT-base with SepNorm, and BERT-base denotes BERT-base with ShareNorm. Both models are finetuned in the same way to make a fair comparison. We have corrected the table in the attached PDF; all the results will be included in the revised manuscript. ## [Limitations] **Q1: The paper does not address limitations. I think it would be useful to discuss whether the results would extend to other types of normalization (see question above), and what potential avenues for future work are.** R1: We thank the reviewer for the insightful questions. A common limitation of such network-layer innovations is the increased complexity of model tuning: there are more hyperparameters to set.
However, we hope the extensive benefit shown in this paper will help the community settle on SepNorm as a default choice. We will add a paragraph discussing the limitations of the proposed method. In terms of extending to other types of normalization, we kindly refer the reviewer to [Questions]-Q1 for the discussion of pairing with other types of normalization. Regarding potential future work, we provide two possible avenues in the following: 1. Relationship between [CLS] and non-[CLS] tokens, and potential architecture improvements: A promising area for future work is exploring the relationship between the [CLS] token and non-[CLS] tokens. Our ablation study indicated that achieving better uniformity in the non-[CLS] tokens could positively impact downstream performance. Additionally, it remains unknown whether dimensional collapse issues exist in token-level tasks such as generation, object detection, etc. If there are any, we suggest investigating whether improvements to transformer designs can address these issues and further enhance performance. 2. Extension to decoder-only large language models: While our current work focuses on encoder-only transformers, we recognize the growing popularity of large language models utilizing decoder-only transformers. To gain a comprehensive understanding of representation distributions and the role of uniformity in generative tasks, it is essential to extend our analysis to these decoder-only architectures. Exploring how uniformity influences the performance of generative tasks in decoder-only transformers may also provide valuable insights into their behavior. We will revise our manuscript to incorporate the discussions suggested by the reviewer. Thank you for the valuable comments. --- Rebuttal Comment 1.1: Comment: Thank you for discussing limitations and showing the will to improve the readability of the paper.
Unfortunately, since the discussion period doesn't allow updating the draft, I cannot improve my score based on promises. However, I think the work is fine either way, so I'll keep the score.
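The SepNorm idea discussed in this thread — separate normalization parameters for the [CLS] token versus the remaining tokens — can be sketched in a few lines of NumPy. This is an illustrative reconstruction under our own naming, not the authors' code:

```python
import numpy as np

class LayerNorm:
    """Plain per-token LayerNorm with learnable affine parameters."""
    def __init__(self, dim, eps=1e-5):
        self.gamma = np.ones(dim)
        self.beta = np.zeros(dim)
        self.eps = eps

    def __call__(self, x):
        mu = x.mean(-1, keepdims=True)
        var = x.var(-1, keepdims=True)
        return self.gamma * (x - mu) / np.sqrt(var + self.eps) + self.beta

class SepNorm:
    """Two LayerNorms with separate parameters: one for the [CLS] token
    (assumed at position 0), one shared by all remaining tokens."""
    def __init__(self, dim):
        self.cls_norm = LayerNorm(dim)
        self.tok_norm = LayerNorm(dim)

    def __call__(self, x):          # x: (seq_len, dim)
        out = np.empty_like(x)
        out[0] = self.cls_norm(x[0])
        out[1:] = self.tok_norm(x[1:])
        return out
```

ShareNorm corresponds to using a single `LayerNorm` for all positions; SepNorm only adds one extra set of gain/bias parameters, which is why the modification is so cheap.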
Summary: The paper introduces SepNorm, a normalization technique for Transformer models. SepNorm separates the normalization of the [CLS] token from the rest of the tokens in a sequence, a departure from the traditional ShareNorm method that normalizes all tokens together. Strengths: 1. By applying SepNorm, the embedding of the [CLS] symbol effectively captures the characteristics of the entire graph, leading to improved results in downstream applications. 2. SepNorm achieves better uniformity by encouraging uniformity for both the [CLS] and token embeddings when used in contrastive methods. 3. SepNorm seems to do well in the evaluation. Weaknesses: No uncertainty/confidence/error bars on experimental results or significance testing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Would SepNorm be helpful for multilingual machine translation tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No visualization of the learned representation from before and after applying SepNorm Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the suggestions on our paper; below we address your concerns accordingly. ## [Weakness] **Q1: No uncertainty/confidence/error bars on experimental results or significance testing.** R1: As suggested, we re-ran the experiments to obtain the standard deviations in the NLP tasks, as they are faster to train compared to the tasks in the CV and graph domains. Moreover, we've also included the experiment results from the supervised training setting. We kindly ask the reviewer to check the results in the attached PDF. ## [Questions] **Q1: Would SepNorm be helpful for multilingual machine translation tasks?** R1: Thank you for the insightful question! SepNorm is better suited to discriminative models than to generative models. BERT originally proposed the [CLS] token, which is usually used in non-autoregressive transformers (the encoder part in [1]). We did not further investigate its use in decoder-based transformers. Since machine translation usually requires an encoder-decoder model to perform conversion between two languages, it may not be an ideal scenario for applying SepNorm. However, SepNorm can be used for multilingual-related tasks such as alignment [2]. We will further discuss the suitable scenarios for using SepNorm in the revised manuscript. ## [Limitations] **Q1: No visualization of the learned representation from before and after applying SepNorm.** R1: Thank you for your feedback. In response, we've visualized the learned representations of the STL-10 dataset for both ShareNorm and SepNorm. We use t-SNE to reduce the dimension from 784 to 2 before visualization. The class label of each data point is highlighted with a different color. We kindly refer the reviewer to the attached PDF for the visualization result. ## [References] [1] Vaswani, Ashish, et al. Attention is all you need. NIPS 2017 [2] Cao, Steven, et al. Multilingual alignment of contextual word representations.
ICLR 2020 --- Rebuttal Comment 1.1: Title: Comments after rebuttal Comment: Thank you for your response, which included re-running the experiments to obtain the standard deviations in the NLP tasks and the visualization of the learned representation before and after applying SepNorm. This shows the robustness of SepNorm. For this reason, I am willing to revise my score upward.
Summary: The authors propose to use a different normaliser for the [CLS] token and the rest of the tokens in masked autoencoders. They motivate this as an improvement on the standard normalisation and combine it with a contrastive uniformity loss. They observe some improvements in classification tasks. Strengths: The paper proposes a new normalisation for MAE using two separate normalisers, one for the [CLS] token and another for the other tokens. They provide evidence that this helps the models obtain better performance both independently and in combination with the uniformity loss. Weaknesses: The contribution, while important, doesn't seem relevant to a larger audience, though per the discussion below it does apply to several tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: While the [CLS] token stands among others, it looks like this can be an extreme of different normalisation depending on the frequency. Have the authors compared how much of the improvement comes from the token frequency vs how much comes from the semantic full-sentence encoding of the token? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have conducted some ablation studies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your effort in reviewing our submission. Below we answer the concerns and questions that you raised; please feel free to ask if you have further questions. ## [weakness] **Q1: The contribution while important doesn't seem relevant for a larger audience.** R1: Thank you for your feedback. We feel that our proposed method can actually benefit a broad class of audiences in the machine learning community. Our proposed plug-and-play component, SepNorm, has the potential to benefit diverse domains. Our experiments demonstrated that it can be easily applied to various transformer-based architectures (e.g., MAE, BERT, Graphormer) used in different domains (CV, NLP, Graph). By incorporating SepNorm, researchers and practitioners can improve the performance of these architectures without extensive modifications. We will enhance the clarity and presentation of our findings in the revised version of the paper. ## [questions] **Q1: While the [CLS] token stands among others, it looks like this can be an extreme of different normalisation depending on the frequency. Have authors compared how much of the improvement comes from the token frequency vs how much comes from the semantic full sentence encoding of the token?** R1: Regarding the "frequency" aspect, are you referring to the occurrence frequency of individual words within the corpus? To rephrase your question, are you asking whether it's necessary to normalize the most frequently appearing word separately, or whether the special token requires its own normalization? Our response is that the unique [CLS] token necessitates distinctive normalization for two primary reasons. First, the vector representation of the [CLS] token is directly employed in downstream tasks. Second, this vector effectively encodes the comprehensive sentence semantics, warranting a distinct treatment. We are open to providing further clarification if our interpretation of your question is not accurate. 
--- Rebuttal Comment 1.1: Comment: Thanks for the additional experiments, the nice work, and the time devoted to clarifying the points of concern. ## [weakness] ### Q1: The contribution while important doesn't seem relevant for a larger audience. ### R1: Thank you for your feedback. We feel that our proposed ... *Reviewer's Answer*: Apologies. In hindsight, I think I should have added more information to that comment. I didn't mean that the practical implications of this technique will not be interesting or relevant, but rather that the paper may -- from one person's point of view, which could potentially be biased -- not be relevant to a large audience for a long time w/o follow-ups. The other comments in the review attempt to show a glimpse of why. I hope my view is clarified, and of course the results that your hard work and diligence have collected are surely relevant and interesting for part of the community. In case of doubt, please note that this has not affected the overall rating, for which I focused on the content itself, given the subjective and risky nature of that comment. ## [questions] ### Q1: While the [CLS] ... ### R1: Regarding the "frequency" aspect, .... *Reviewer's Answer:* The main question can be restated as: what are the causes, and not the effects, of the SepNorm approach? The analysis seems to be focused on uniformity. Given the heuristic nature of the proposed solution and the uniformity-effect analysis, I think a comparison with other heuristic solutions based on statistical analysis might be beneficial. Note that because normalisation layers essentially standardise the input, the improvement can come from a purely statistical effect of taking out statistical outliers, or recursive mean computation, ... . For instance, one could simply split the tokens into top-k frequent events and less frequent events, which might show different distribution behaviour. 
Since the token [CLS]/[VNode] is a single token and occurs in each training example, it can bias the distribution statistically because of its frequency. Alternatively, if the [CLS] activation is the average of the other activations, then the normalization layer might not be able to adapt during training to those dynamics, where x_0 = mean(x1...xN). Note as well that if the sequence lengths are on average 128 and we have N training examples, then there is a strong bias against the [CLS] token activation vs other frequent tokens such as "th" or "a" --among others--; or even 1/128 if all other tokens do share an activation distribution except for [CLS]. Consequently, one could scale the contribution of [CLS] proportionally. Furthermore, we could erase the frequency from the update by making parameters depend on the frequency, ... etc ... These are some of the thoughts I was considering while writing the main question. Essentially, SepNorm is simply one solution to the approach that is subsumed in other options when trying to find the causes/solutions for the [CLS] behaviour. Note I have intentionally dropped [SEP] from the discussion above. BTW, what about the [SEP] token? Would it be beneficial to split it out as well? Thanks !! --- Reply to Comment 1.1.1: Title: Thank you for your comment! Comment: ### **Reviewer's explanation on the contribution comment** #### *Author response*: Thank you for the response, and we appreciate that you recognize our hard work. We agree that our proposed reasoning may not be unique. However, we would like to emphasize the importance and relevance of this problem (self-supervised transformers, which have been the de facto choice of different research communities for learning from unlabeled data, as demonstrated by our experiments in NLP, CV, and Graph). We also hope that by observing and interpreting the problems, more people will notice this phenomenon and (potentially) other related phenomena in self-supervised transformers. 
We believe our observation and explanation are self-contained, and we propose a simple yet effective approach to mitigate the issue and demonstrate the empirical performance gains across various tasks. We will add more discussion in the future work section to point out potential extensions and follow-ups of our work. ### **Reviewer's clarification on the frequency question** #### *Author response*: Thank you for your clarification! **Here is our short response**: We advocate treating [CLS]/[SEP] as different variables from other natural tokens (by natural, we mean the token is originally in the dataset and not introduced heuristically, such as [CLS] and [SEP]). But we do not think treating natural tokens differently based on their behavior (for instance, their frequency) is a good idea. While SepNorm works well in promoting representation quality, we agree that it is not the only solution for separating the two kinds of variables, and other promising solutions may exist. We’d like to leave such work for the future. We think that apart from SepNorm, another contribution of our work is the finding that the two types of tokens should not share the same embedding space (shown through uniformity analysis). We hope the reviewer can agree on this. Below we further elaborate on the reviewer's comments: **(We advocate treating [CLS]/[SEP] as different variables (outliers) against other natural tokens.)**\ Your observation that “the improvement can come from a purely statistical effect of taking out statistical outliers, or recursive mean computation” is insightful. Your assumption that the [CLS] embeddings (outliers) do not belong to the intrinsic distribution of the other token embeddings aligns with our motivation. Our approach takes a further step, assuming those “outliers” belong to another distribution, and assesses the uniformity of the “outliers” distribution. 
**(treating natural tokens differently (as outliers) based on their behavior (for instance, their frequency) & biased fitting)**\ However, we do not feel it’s reasonable to identify “outliers” based on their frequency, since stopwords like “the” and “a” are frequently observed in the data. Statistically, their high frequency will not bias the fitting of the normalizing parameters because such behavior is the nature of the data. On the contrary, adopting different treatments for those high-frequency tokens may result in biased fitting. Such an argument does not apply to the [CLS]/[SEP] tokens, since they are heuristic and not originally from the data. We further provide an ablation study that performs SepNorm on stopwords (collected from NLTK) such as the “a”, “the”, and “are” tokens. The performance gain is marginal (74.04&rarr;74.18 on unsup. STS with the BERT model). **(Simple treatment that reweights contribution between [CLS] and other tokens)**\ The remark made by the reviewer that “one could scale the contribution of [CLS] proportionally (when updating the normalization statistics) to improve model performance” points to a possible approach. Intuitively, such an approach can determine “the 'volume' the [CLS] token and the other tokens should occupy” by tuning the contribution coefficient. To do this, we increase the weight of the [CLS] token to L or 2L (L=256) when optimizing the normalization statistics. Such an operation leads to a performance gain (see the table below), but not as good as SepNorm. Moreover, further increasing the importance weight might result in performance drops. We believe such an operation requires careful parameter selection to balance the two types of tokens. SepNorm directly separates them into two embedding spaces, ensuring they won’t “compete” for space. 
| | ShareNorm | ShareNorm(L) | ShareNorm(2L) | SepNorm |
|-|:-:|:-:|:-:|:-:|
|**Accuracy**| 92.01 | 92.75 | 91.88 | 93.84 |

**(SepNorm on [SEP] token)**\ In response to the last comment about the [SEP] token, there is no harm in introducing SepNorm to it. *We believe such an operation may help promote the uniformity of the natural token embeddings, but the downstream improvement might be marginal*, because the [CLS] token may benefit less from it, since it already has its own normalization layer. We thank you again for the clarification, and we hope we've answered all the questions. Please do not hesitate to ask if there are more questions.
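The reweighting heuristic compared in the table above can be sketched as a weighted estimate of shared normalization statistics, where the [CLS] vector (index 0) contributes with a tunable coefficient. This is an illustrative stand-in under our own assumptions, not the exact implementation behind the table; `cls_weight` is a hypothetical parameter.

```python
def weighted_norm_stats(tokens, cls_weight=1.0):
    """Per-dimension mean/variance over a sequence of token vectors,
    with the [CLS] vector (index 0) upweighted by `cls_weight`.
    cls_weight=1.0 recovers plain shared (ShareNorm-like) statistics."""
    weights = [cls_weight] + [1.0] * (len(tokens) - 1)
    total = sum(weights)
    dim = len(tokens[0])
    mean = [sum(w * t[d] for w, t in zip(weights, tokens)) / total
            for d in range(dim)]
    var = [sum(w * (t[d] - mean[d]) ** 2
               for w, t in zip(weights, tokens)) / total
           for d in range(dim)]
    return mean, var
```

Increasing `cls_weight` pulls the shared statistics toward the [CLS] activation, letting [CLS] "occupy more volume" without fully separating the embedding spaces as SepNorm does.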
Summary: The paper proposes a new method called SepNorm, which utilizes separate normalization layers for the [CLS] token and the remaining tokens, replacing the conventional single normalization layer (ShareNorm) in transformers. Experiments demonstrate the importance of applying separate normalization to the [CLS] token and the remaining tokens, and show that SepNorm enhances the transformer's ability to encode the input. Strengths: The paper proposes exploring separate normalization for the [CLS] token in pretrained transformers. Experiments in three different domains show the effectiveness of the proposed method. Weaknesses: The paper mentions that "Our method aims to alleviate the potential negative effects of using the same normalization statistics for both token types, which may not be optimally aligned with their individual roles." It seems that this statement is not empirically validated in the paper. In the ablation study, only experiments on a CV dataset have been conducted. It would be better if more experiments on NLP and Molecule Discovery datasets could be carried out. The paper only selects one or two models for each domain in the experiments, such as evaluating the STS task in the NLP domain. However, this limited selection of tasks is not sufficient to demonstrate the generalizability of SepNorm. It would be more appropriate to include a wider range of experiments across multiple tasks to provide a comprehensive evaluation of the effectiveness and generalizability of the proposed SepNorm method. The paper does not empirically compare the proposed method against previous works, such as PowerNorm mentioned in the related work section. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Can the proposed method be scaled to large language models? 2. How is the speed of SepNorm in comparison to ShareNorm? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper does not discuss the limitations of the proposed method. It would be better to discuss the potential impact of the proposed normalization method on learned representations and training time, in addition to the performance on the downstream tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## [weakness] **Q1: The statement "Our method aims to alleviate the potential negative effects of using the same normalization statistics for both token types, which may not be optimally aligned with their individual roles." is not empirically validated in the paper.** R1: The potential negative effect we refer to is **the relatively low uniformity score achieved by using ShareNorm**. Uniformity has been widely used to assess the degree of dimension collapse and is often considered an essential metric for learned representation quality [1,2,3]. We quantitatively showed that the uniformity for ShareNorm is consistently worse than for SepNorm, regardless of the emphasis that is placed on the metric. We will modify our statement accordingly to avoid such confusion. **Q2: It would be better if more experiments on NLP and Molecule Discovery datasets could be carried out.** R2: Thank you for your comment. As you suggested, we expanded our experiments to cover a broader range of NLP datasets. Due to time constraints, we were not able to explore the Molecule tasks. We will consider incorporating more Molecule datasets in the future. Below, we briefly describe how we further validated our proposed method on various NLP datasets/tasks. * Models: We use BERT and RoBERTa models. * Input variants: We finetune BERT and RoBERTa on multiple datasets using three input settings, using either the [CLS] or [MASK] tokens for sentence-level tasks. For [MASK] predictions, an actual word is predicted, and its semantic meaning determines the label, e.g., "terrible" indicates a negative label. 1. Standard: Input is "[CLS] <Sentence> [EOS]"; we predict labels via the [CLS] embeddings. 2. Prompt-based: Input is "[CLS] <Sentence>. This is [MASK] [EOS]"; we use the [MASK] embeddings for predictions. 3. Prompt-based with demo: Input is "[CLS] <Sentence1>. This is [MASK]. [SEP] <demo1>. This is <label1>. [SEP] <demo2>. This is <label2> [EOS]"; we use the [MASK] embeddings for predictions. 
Demos are drawn from the training data. * Datasets: We use the following datasets: Stanford Sentiment Treebank 2&5 (SST-2 & SST-5), Movie Reviews (MR), Customer Reviews (CR), Multi-Perspective Question Answering (MPQA), Subjectivity Dataset (SUBJ), Corpus of Linguistic Acceptability (CoLA), Text REtrieval Conference (TREC), Stanford Natural Language Inference (SNLI), Question Natural Language Inference (QNLI), Microsoft Research Paraphrase Corpus (MRPC), Quora Question Pairs (QQP). * Results: We report the results in the uploaded pdf in the general response. As we can observe, applying SepNorm on [CLS] helps improve the model performance with different prompting methods. We hope the additional experiments can address the reviewer’s concern about our method’s effectiveness. **Q3: Include a wider range of experiments across multiple tasks to comprehensively evaluate the effectiveness and generalizability of SepNorm.** R3: Thank you for the feedback. In response, we've extended our evaluation to multiple transfer tasks, including MR, CR, SUBJ, MPQA, SST-2, TREC, and MRPC. To do this, we freeze the model weights learned from SimCSE and only train the classifier on top of the model. We kindly refer the reviewer to the attached pdf for more details on the experiment results. **Q4: The paper does not empirically compare the proposed method against previous works, such as PowerNorm...** R4: We’d like to point out that we do not propose a new normalization layer. Instead, we propose a normalization strategy that can be combined with different normalizations such as BatchNorm, LayerNorm, and PowerNorm. Since there are many combinations of different normalization layers in SepNorm, we simplify the setting and focus on the canonical case. ## [questions] **Q1: Can the proposed method be scaled to large language models?** R1: Our method can be applied to large language models that utilize the [CLS] tokens. 
For example, CLIP aligns the [CLS] embeddings from image and text via contrastive learning. SepNorm can be incorporated into the text and vision transformers. **Q2: How is the speed of SepNorm in comparison to ShareNorm?** R2: With SepNorm, the model needs no additional computation; it only stores separate means and variances, at a little extra space. In practice, the cost of replacing ShareNorm with SepNorm is negligible. For example, when training an MAE on the STL-10 dataset on a machine with one A100 GPU and 32 CPU cores, both normalization methods took 36 hours to finish. ## [limitations] **Q1: It would be better to discuss the potential impact of the proposed normalization method on learned representations and training time, in addition to the performance on the downstream tasks.** R1: Thank you for the feedback. 1. Learned Representations: We quantitatively evaluated representation uniformity, noting that better uniformity indicates better preservation of the data information [1]. Our qualitative analysis in Figure 2(a,b) shows that SepNorm results in a more uniform [CLS] feature distribution around a mean of 0. Moreover, the slower singular value decay in Figure 2(c,d) suggests that SepNorm can better span the feature space, leading to better representations. 2. Training time: The replacement with SepNorm does NOT incur extra computational cost. The extra computation here is to perform another normalization operation over the features, which is affordable compared to the self-attention modules. 3. Performance on the downstream tasks: We verify SepNorm on downstream tasks from three domains. For example, we pretrain the transformers with ShareNorm/SepNorm in the CV and graph domains and finetune them on the corresponding downstream tasks (results are reported in Tables 1 and 3 in Section 5). ## [Reference] [1] Wang, et al. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. ICML 2020. [2] Bojanowski, et al. 
Unsupervised learning by predicting noise. ICML 2017. [3] Mettes, et al. Hyperspherical prototype networks. NIPS 2019. --- Rebuttal Comment 1.1: Title: Thanks for providing new experiment results Comment: The provided new results partially address my concerns. For this, I'd like to increase my score. But some results are mixed, and the method is quite limited to scenarios with a [CLS] token. --- Reply to Comment 1.1.1: Comment: Thank you for your kind response. We are willing and happy to provide more results if you have other comments on our additional experiments. Regarding the comment that our proposed method has limited application, we believe the use of the [CLS] token is quite universal and could benefit various domains, as illustrated in the experiments.
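Since the uniformity metric is central to the rebuttal's argument ([1] Wang et al. above), here is a hedged sketch of that measure: the log of the mean pairwise Gaussian potential between embeddings, where more negative values indicate embeddings spread more evenly on the hypersphere. The function name and the choice t=2 are illustrative conventions, not taken from the paper.

```python
import math
from itertools import combinations

def uniformity(embeddings, t=2.0):
    """log E[exp(-t * ||x - y||^2)] over all pairs of embeddings.
    Equals 0.0 for fully collapsed embeddings; grows more negative
    as the embeddings spread apart."""
    pots = [math.exp(-t * sum((a - b) ** 2 for a, b in zip(x, y)))
            for x, y in combinations(embeddings, 2)]
    return math.log(sum(pots) / len(pots))
```

Under this metric, a collapsed set of identical vectors scores 0, while well-spread vectors score negatively, which matches the rebuttal's use of lower uniformity values as evidence against dimension collapse.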
Rebuttal 1: Rebuttal: We thank all reviewers for their effort in reviewing our submission, and we appreciate all the positive and negative feedback on our manuscript. We've tried our best to address all the concerns and questions raised by the reviewers. The attached pdf includes the additional experiments and visualizations asked for by reviewers QRJ8 and hvnd. Thanks again for the valuable feedback. Pdf: /pdf/30eaaee046bc98936320f49fc0f8862cd9ddb8d5.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper analyzes the normalization layer that is applied to both the context tokens and the [CLS] token in an input. It argues that traditional normalization that is applied to both types of tokens, dubbed ShareNorm, is not effective since the tokens have different roles. The paper proposes to use separate normalization layers for these tokens, SepNorm, and conducts a series of analyses. Echoing prior work, they showed that a measure of uniformity of the representations correlates with downstream classification performance, and further that ShareNorm finds it hard to tackle this uniformity deficiency even with uniformity-fixing methods, while SepNorm improves uniformity and hence downstream task performance. Strengths: 1. The paper is well written and easy to understand. It starts with sufficient background and explains the analysis clearly. 2. The hypothesis intuitively makes sense to me, and the empirical analysis provides good support. 3. Experiments are conducted on a number of tasks in vision, NLP, and graphs. Weaknesses: I don't have major concerns about this work. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: no limitations were discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are sincerely grateful for your time and effort in reviewing our submission. We greatly appreciated your positive feedback on our proposed method, and we are glad to know that you found the paper well-written and easy to understand. Once again, thank you for your kind words and positive recommendation. --- Rebuttal Comment 1.1: Comment: Thanks!
IBA: Towards Irreversible Backdoor Attacks in Federated Learning
Accept (poster)
Summary: The paper introduces a two-phased backdoor injection framework, called IBA, for Federated Learning systems. IBA incorporates an adaptive trigger generation mechanism along with a gradual implantation process to insert stealthy backdoors into the global model. IBA enhances the efficiency and durability of the attack through selective poisoning of specific model parameters. Through evaluation using multiple datasets, IBA demonstrates high success rates and outperforms existing backdoor injection methods even in the presence of several defense techniques. Strengths: The paper is well-written and easy to follow. It addresses a crucial problem in federated learning. Weaknesses: The paper has several limitations, which are discussed below: The primary contribution of the paper is to use adversarial examples for generating adaptive backdoor triggers and selective parameter poisoning for injecting durable backdoors. However, a quick search resulted in the following two papers: [1] M. Alam et al., "PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations", arXiv 2022. [2] Z. Zhang et al., "Neurotoxin: Durable Backdoors in Federated Learning", ICML 2022. It is evident from these papers that [1] uses adversarial examples as backdoor triggers, and [2] uses selective parameter poisoning for durable backdoors. On top of that, [2] performs similar operations as the authors do to increase the durability of backdoors. The authors should elaborate on the distinctiveness and novelty of their approach compared to these preceding works. IBA utilizes adversarial examples as backdoor triggers, which are known for their input specificity. The paper could benefit from a more comprehensive explanation of how IBA addresses this specificity: for example, whether triggers are unique to each input or shared across a class. The authors should also elaborate on whether the trigger generation function must be trained at each training round. 
If so, what is the computational overhead for that? IBA is not evaluated against important state-of-the-art defenses like FLAME [3], SparseFed [4], etc. IBA should be compared against other state-of-the-art attacks like 3DFed [5]. IBA should be evaluated using more practical benchmark federated learning datasets from the LEAF project [6]. The evaluation of IBA considering the fixed-pool case should be included as the main content of the paper. The reported accuracy of 98.09% on the CIFAR-10 dataset and the subsequent 14% drop in accuracy in Table 1 raise skepticism, especially in comparison to the marginal drop observed for T-Imagenet. The authors should provide clarification and insights into this discrepancy. In addition, there appears to be a mismatch between the content of Table 1 and its explanation in Section 4.2 (1). The authors need to ensure consistency and clarity in the representation and discussion of results. The authors should provide a detailed explanation for specific results, such as why the combination of IBA and PGD yields an accuracy of 18.74% for MNIST and RLR in Table 2, and other similar cases. With such low accuracy, why is IBA deemed to bypass every defense mechanism? The authors mention that "This trigger is expected to bypass human and machine inspections" but fail to discuss the metrics or criteria used for this assessment. Figure 1 lacks an adequate explanatory description. A more thorough discussion would enhance the clarity and comprehensibility of the figure. To provide more insights, it would be beneficial to depict results on more complex color images in Figure 4 instead of using the monochromatic MNIST dataset. The authors should discuss the significance of using FedProx and FedNova, and clearly describe how these differ from FedAvg. This would help in understanding the rationale behind their inclusion. What does the anomaly index represent in Figure 6? 
NeuralCleanse is mentioned as a defense mechanism against backdoor attacks in FL, but it was not originally designed for the FL framework. Clarifying its relevance and applicability in this context would be beneficial. [3] T. Nguyen et al., "FLAME: Taming Backdoors in Federated Learning", Usenix Security 2022. [4] A Panda et al., "SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification", AISTATS 2022. [5] H. Li et al., "3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning", IEEE S&P 2023. [6] https://leaf.cmu.edu/ Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please address the issues discussed in Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: The authors have discussed the limitations and potential negative societal impacts of the paper. However, the paper has several other limitations, as discussed in the review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
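To make the adaptive-trigger mechanism discussed in this review concrete, here is a hedged sketch of an additive, input-aware trigger: a generator G produces a per-input perturbation that is kept small in norm before being added to the input. The generator, the L2 clipping scheme, and `epsilon` are illustrative assumptions, not IBA's exact design.

```python
import math

def apply_trigger(x, generator, epsilon=0.1):
    """Sketch of an input-aware additive trigger T(x) = x + G(x).
    The perturbation G(x) is rescaled so its L2 norm stays within
    `epsilon`, a stand-in for keeping the trigger imperceptible.
    `generator` and `epsilon` are hypothetical components."""
    delta = generator(x)
    norm = math.sqrt(sum(d * d for d in delta))
    scale = min(1.0, epsilon / norm) if norm > 0 else 0.0
    return [xi + scale * di for xi, di in zip(x, delta)]
```

Because the perturbation depends on x, each input receives its own trigger, which is exactly the input-specificity property the review asks the authors to clarify.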
Rebuttal 1: Rebuttal: We appreciate the valuable comments. Additional experiments are at [**EXP**](https://files.fm/f/r67m5e7a3). 1. **Compare with PerDoor and Neurotoxin** PerDoor uses the Basic Iterative Method to generate the trigger and relies on gradients of the loss w.r.t. the input. Alternatively, we learn a generative model to generate a trigger that blends well with the underlying data distribution. Moreover, the collaborative clients learn their own generative models but aggregate them into one shared model in the next rounds. PerDoor lacks this sharing mechanism, which makes our attack more effective and durable. While the idea of gradient masking is explored in [36, 37], our usage of historical information to determine the gradient mask is novel. Our use of a historical gradient mask helps reduce the adverse effect of the fluctuation of the global model on finding the optimal gradient mask to poison, especially for non-iid clients with skewed data distributions (which may temporarily distort the global behavior). The additional experiments in EXP show the superior performance of IBA's poisoning compared to Neurotoxin. 2. **Computation overhead** As presented, the trigger is $T(x) = x + G(x)$ for each input $x$, where $G$ learns to generate an input-aware trigger whose norm $\|G(x)\|$ is kept small relative to $x$. However, G is updated after the local model’s training is completed, which does not cause a delay in local model submission. In other words, the computational overhead can easily be moved to a different thread or outside of FL training. 3. **Evaluation against FLAME/SparseFed** We performed additional experiments with FLAME in EXP and show that even stand-alone IBA can get around it. IBA achieves 99% BA on MNIST for both cases of K=5 and K=20. Our considered threat model is different from that of SparseFed, where the server does not send the identical global model to each device. 4. **Comparison with 3DFed** We tested IBA against the SOTA 3DFed [5]. 
We evaluated the method's durability post-attack over 200 rounds (see Figure 3 of EXP). Notably, after 50 epochs and reaching round 250, the 3DFed model's ability to learn backdoor tasks deteriorated significantly. Accuracy dropped from 85.98% to 9.69% on MNIST and from 75.92% to 6.06% on CIFAR-10. In contrast, IBA shows superior resilience: even at round 450, the accuracy drop remained below 1% for MNIST and 5% for CIFAR-10. IBA outperformed both 3DFed and DBA, which exhibited a gradual accuracy decline beyond round 450. 5. **Evaluation with LEAF** We employed the "Attack of the Tails" GitHub repository as the foundation for implementing and evaluating IBA, to ensure a reliable evaluation within an established framework. However, assessing IBA on more FL datasets and benchmarks is an interesting future direction of our work, as suggested. 6. **Evaluation of IBA considering the fixed-pool case** We will revise this part in the later version, as suggested. 7. **Accuracy drop on CIFAR** We confirm that the MA of CIFAR-10 (baseline case w/o attack) should be 84.71% (a typo in the current version). We will revise it accordingly. 8. **Explanation for specific results** While Krum and RLR defend against the proposed attack to some extent, i.e., we observe a drop in the BAs of IBA, we argue that Krum and RLR tend to reject previously unseen information from both adversary and honest clients, which results in low MAs (a drop of 3–10%). Specifically, the proposed IBA causes RLR to compensate for the backdoor defense’s effectiveness with a main-accuracy drop on the MNIST dataset by unnecessarily reversing the learning rate of specific dimensions, i.e., the BAs are around 90% (about 10% less than under other defenses). Therefore, these two defenses raise concerns about their practicality. 9. 
**Metrics for assessing bypassing human inspection** We follow WaNet to perform a similar human evaluation experiment: we tell the testers (a cohort of 50) the mechanism of the attack (e.g., how the trigger is created), present to a tester a clean image and its corresponding backdoor image (with the trigger) without revealing the images' identities, and ask them to identify the backdoor image. The result shows that the testers cannot distinguish which images are clean and which are backdoored, suggesting that our trigger is imperceptible to human inspection. 10. **Explanation for Fig 1** We added one paragraph to explain Fig 1 in Sec 3: Irreversible Backdoor Attacks (IBA) in FL. In the trigger-generating stage, IBA trains a generative trigger model. In the second stage, the attacker joins the FL process and trains a backdoored model with the triggers. The loss function of IBA is a combination of the loss on the backdoored data and the loss on the benign data. The objective of the attacker is to maximize the accuracy of the backdoored model on the training set above a threshold (i.e., 0.85). 11. **Results on color images** We include the Grad-CAM analysis for CIFAR-10 (in EXP). We observe that the backdoored images have negligible deviations from the Grad-CAM behavior on the clean images. On the other hand, backdoored images using patched triggers generate a considerable difference in the visualization heat maps (DBA [31]). 12. **Using FedProx and FedNova** We will discuss FedProx and FedNova in the later version; evaluating IBA against them is an interesting extension of our work. 13, 14. **Anomaly index in Figure 6 and Neural Cleanse in FL** NC reverse-engineers the assumed patch-based trigger and uses the Anomaly Index to identify an anomalous trigger that is smaller than the rest. If the Anomaly Index exceeds 2 for a class, NC tags the model as backdoored with this class as the target label.
NC's Anomaly Index on IBA stays below 2, similar to the benign model, on T-Imagenet. This means that NC is not effective against IBA. [36] Zhou et al. "Deep model poisoning attack on federated learning." [37] Zhang et al. "Neurotoxin: Durable backdoors in federated learning." --- Rebuttal 2: Comment: I want to thank the authors for their efforts in responding to these queries. I respect the hard work put into this paper and trust that these suggestions will only enhance its quality. I am satisfied with most of the responses and am increasing my score. --- Rebuttal Comment 2.1: Comment: We are glad to answer all of the questions. Thank you very much for your insightful comments and kind help. Our work proposes a novel backdoor attack on federated learning that achieves multiple important objectives, including effectiveness, stealthiness against human/machine inspections, and durability, supported by intensive experiments. This work takes an important step towards understanding the extensive risks of backdoor attacks in FL, urging practitioners to investigate more effective backdoor mitigation methods in the FL domain.
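The two-stage objective described in the rebuttal above (attacker loss as a combination of the benign and backdoored losses, with trigger $T(x)=x+G(x)$ bounded in the $\ell_\infty$ norm) can be sketched as follows. This is a hedged toy illustration: the linear "model", tanh "generator", and the budget value are illustrative stand-ins, not the paper's actual networks or hyperparameters.

```python
import numpy as np

# Hedged sketch of the IBA training objective: the attacker's loss is a
# weighted combination of the loss on benign data and the loss on
# backdoored data, where the backdoor input is T(x) = x + G(x) with
# ||G(x)||_inf <= eps. The linear "model" and tanh "generator" below are
# toy stand-ins for the neural networks used in the paper.

EPS = 0.05            # stealthiness budget (illustrative value)
ALPHA = BETA = 0.5    # loss weights, as reported in the rebuttal

def generator(x, Wg):
    return EPS * np.tanh(Wg @ x)          # output bounded in [-EPS, EPS]

def apply_trigger(x, Wg):
    # T(x) = x + G(x), clipped back to the valid pixel range [0, 1]
    return np.clip(x + generator(x, Wg), 0.0, 1.0)

def cross_entropy(logits, label):
    z = logits - logits.max()             # numerically stable log-softmax
    return -z[label] + np.log(np.exp(z).sum())

def attacker_loss(Wm, Wg, x, y_true, y_target):
    benign = cross_entropy(Wm @ x, y_true)
    backdoor = cross_entropy(Wm @ apply_trigger(x, Wg), y_target)
    return ALPHA * benign + BETA * backdoor

rng = np.random.default_rng(0)
x = rng.random(16)                        # flattened toy "image"
Wm = rng.standard_normal((3, 16))         # toy 3-class classifier
Wg = rng.standard_normal((16, 16))
loss = attacker_loss(Wm, Wg, x, y_true=0, y_target=2)
```

Note that clipping to $[0,1]$ cannot enlarge the perturbation, so the $\ell_\infty$ bound on $T(x)-x$ is preserved.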
Summary: This paper studies backdoor attacks in the federated learning setting. The authors propose a new backdoor attack method based on sample-specific triggers, optimized with constraints on weight norm and weight dimensions, to make the attack stealthier, harder to detect, and more durable. A U-Net is trained to generate sample-specific triggers, and the stealthiness of the trigger is controlled by $\epsilon$, which also controls the balance between attack success and trigger stealthiness. To make the attack harder to detect and more durable, the gradients are projected onto a ball around the global model weights and optimization is constrained to dimensions that are historically infrequently updated. Experiments on CIFAR-10, MNIST and TinyImageNet show that the proposed method is effective against multiple state-of-the-art defense methods. Strengths: - The paper is well organized and the ideas are clearly presented. - Experiments show that the method can evade multiple state-of-the-art defense methods. Weaknesses: - Even though the proposed method is compared with DBA to show its durability, the paper lacks comparison with other state-of-the-art attack methods. Experiments comparing multiple attack methods against state-of-the-art defense methods on different types of datasets should be conducted to show the effectiveness of the method. - As for the poisoning dimension restriction, the idea of restricting updates to dimensions that are historically seldom updated is similar to that of Neurotoxin. Why compare with DBA, which does nothing to promote the durability of the attack? Why not compare with Neurotoxin? - Generally speaking, the data distribution of clients may affect the performance of attack and defense. Dir(0.5) and Dir(0.01) are used for MNIST and CIFAR-10/TinyImagenet. Experiments studying the effect of data distribution could help understand the method better.
- Based on Figure 2 and Table 2, it seems that the method does not do a good job against Krum and RLR. The attack accuracy of the method keeps dropping when tested against the two defense methods. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See questions and concerns in the previous part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The main limitation is the lack of experiments to support the claims. See details in the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
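The dimension restriction summarized in the review above (constrain the poisoned update to coordinates that are historically infrequently updated) can be sketched roughly as follows. The function names and the top-level flow are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Sketch of the "poison only historically infrequently updated
# dimensions" idea: accumulate the magnitude of global-model updates
# over past rounds, then restrict the malicious update to the bottom-k
# coordinates of that accumulated magnitude.

def historical_mask(update_history, k):
    """Boolean mask selecting the k coordinates with the smallest
    accumulated update magnitude across past rounds."""
    accumulated = np.sum(np.abs(update_history), axis=0)
    idx = np.argsort(accumulated)[:k]
    mask = np.zeros(accumulated.shape, dtype=bool)
    mask[idx] = True
    return mask

rng = np.random.default_rng(1)
history = rng.standard_normal((20, 100))   # 20 rounds of a 100-dim model
mask = historical_mask(history, k=10)
poison_grad = rng.standard_normal(100)
masked = np.where(mask, poison_grad, 0.0)  # poison only the masked dims
```

The intuition (as described in the rebuttal) is that benign aggregation rarely touches these coordinates, so the planted behavior is overwritten more slowly after the attacker leaves.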
Rebuttal 1: Rebuttal: We appreciate the comments and suggestions from the reviewer. The following is our response to the raised concerns. 1. **Comparison with other state-of-the-art attack methods** As mentioned, our objective is to design a backdoor attack with efficiency, stealthiness, and durability under more practical assumptions. Therefore, regarding evaluating efficiency and stealthiness, we refer to the experimental design in prior works [1, 26, 31] to understand the performance of IBA under different attack schemes (fixed-frequency vs. fixed-pool) and mainstream backdoor defenses. With respect to durability, we selected DBA (a SOTA backdoor attack in FL) and configured it such that the two attacks share the same participation frequency, and we observed the behavior of the backdoor accuracy after the adversary is removed. Based on the two main designs above, we draw the conclusion that IBA can achieve results equivalent to other attacks such as Edge-case [26] and DBA [31] under practical assumptions, but is stealthier in terms of visual representation and more durable. To further demonstrate the superior durability of the proposed IBA, we compared it with 3DFed [34], a new SOTA backdoor attack in FL, and showed that IBA brings much better durability and performance. 2. **Comparison with Neurotoxin** We conducted additional experiments to compare the durability of the proposed attacks with Neurotoxin: - IBA + proposed model poisoning (Ours) (1) - DBA + Neurotoxin (2) - Centralized backdoor attack + Neurotoxin (3) As observed, the combination of DBA and Neurotoxin (2) does not extend the backdoor effect well. Specifically, the BA decreases gradually after round 600, at which point the adversaries leave the training. Moreover, this combination even brings down the main task's accuracy.
We consider the standard centralized backdoor attack [31, 1] combined with Neurotoxin (3) under the same setting as our method (1 attacker participating every 10 rounds). In the experiment with Neurotoxin, the adversary uses a patched trigger to create the backdoor attack. From the results, our proposed model poisoning method brings better durability compared to the original Neurotoxin, i.e., after round 400, at which the adversary leaves the training, the Neurotoxin-based attack's BA drops more quickly. For a comparison of IBA and Neurotoxin over FL training rounds, please refer to Figure 3 in the `Extended Durability Evaluation` section in this [**pdf**](https://files.fm/f/r67m5e7a3) file. 3. **Effect of data distribution** We consider different values of alpha (0.2, 0.5, 0.7) in the Dirichlet distribution to simulate distributions ranging from non-i.i.d. to i.i.d. for the image datasets. Evaluated on 2 different datasets, the results show that IBA's BA is stable under various distributions, which demonstrates the practicability and robustness of IBA when attacking standard FL. The tables below show the results of IBA under different alpha values in the Dirichlet distribution.

**Main Accuracy** (MA) of IBA under different alpha values in the Dirichlet distribution.

| Dataset | Alpha = 0.2 | Alpha = 0.5 | Alpha = 0.7 |
| --- | --- | --- | --- |
| MNIST | 99.01% | 99.98% | 98.99% |
| CIFAR-10 | 82.73% | 84.28% | 84.53% |

**Backdoor Accuracy** (BA) of IBA under different alpha values in the Dirichlet distribution.

| Dataset | Alpha = 0.2 | Alpha = 0.5 | Alpha = 0.7 |
| --- | --- | --- | --- |
| MNIST | 98.89% | 98.03% | 98.81% |
| CIFAR-10 | 83.26% | 85.63% | 88.93% |

4. **Effectiveness against Krum and RLR** While Krum and RLR defend against the proposed attack to some extent, i.e., we observe a drop in the BAs of IBA, we argue that Krum and RLR tend to reject previously unseen information from both adversary and honest clients, which results in low MAs (a drop of 3-5%).
Therefore, these two defenses raise concerns about their applicability under the proposed attacks.
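The Dirichlet-based non-i.i.d. simulation evaluated in point 3 above (sampling per-class client proportions from a Dirichlet distribution with concentration alpha) can be sketched as follows; function and variable names are illustrative, and the exact partitioning code in the paper may differ.

```python
import numpy as np

# Sketch of Dirichlet-based non-iid data partitioning: for each class,
# sample proportions p ~ Dir_K(alpha) and split that class's sample
# indices across the K clients accordingly. Smaller alpha gives more
# skewed (more non-iid) client datasets.

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_indices[k].extend(part.tolist())
    return client_indices

labels = np.repeat(np.arange(10), 100)   # toy dataset: 10 classes x 100
parts = dirichlet_partition(labels, num_clients=5, alpha=0.5)
```

With alpha = 0.2 most of a class's samples land on a few clients, while alpha = 0.7 yields nearly balanced splits, matching the range studied in the table above.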
Summary: The authors propose a backdoor attack framework (IBA) in Federated Learning for trigger-based backdoors that jointly learns a generative model for stealthy visual triggers while also planting the backdoor in the global model. They evaluate their attack on MNIST, CIFAR-10, and TinyImageNet and show that it bypasses several known defenses such as Krum, NC, RFA, etc. Strengths: 1. The experimental setup is quite thorough, considering several datasets, settings, and defenses. 2. The attack appears to bypass all the considered defenses with good effectiveness and stealth. 3. The work acknowledges past work well with a good description of existing research. Weaknesses: 1. While the attack is titled "Irreversible Backdoor Attack", the longevity of the attack is not well studied. Figure 5 is only for the MNIST dataset and most other experiments consider the fixed-frequency attack. I would like to see a better analysis of how long the attack takes to be removed by clean training. 2. I would like to see a comparison to even trigger-less backdoor attacks, since the PGD model replacement attack has already been well studied in Bagdasaryan et al. and Wang et al. I would expect this to be much stronger, but it would act as a good baseline. While there are several ablations, I would like to see comparisons to more existing baselines, perhaps even BadNets [Gu et al.]. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In (2), the adversarial perturbation is bounded in the $\lVert \cdot \rVert_{\infty}$ norm. Can this be translated to the 2-norm? Naively speaking, the same attack budget will get scaled down by $\sqrt{d}$, which will reduce the effectiveness of the attack. Have you tried this? 2. It is quite interesting that the optimal choice of scaling factors is $\alpha = \beta = 0.5$. Are the scales of the two losses very similar? 3. How is $\lambda_{\xi}$ chosen? 4. Please explain acronyms, such as MA and BA, in the table captions. 5.
Please explain what bold entries in the tables mean. It can be inferred, but it would be nice not to have to. 6. Section 4.2 is quite hard to understand, especially since accuracies are listed as "ranging" across datasets. This is quite unusual and doesn't really make sense. 7. What is MR? 8. It is hard to evaluate the performance of IBA from Table 2 without comparing against other attacks. 9. It seems like for some defenses (like Krum), IBA + PGD outperforms IBA + PGD + MR, whereas for RLR, it gets worse. Can you explain this behavior? Is one supposed to choose the attack based on the defense? 10. In Figure 3, is the attack at round 0? 11. Can you specify the choice of $\varepsilon$ for the PGD attack? 12. Some citations are duplicated (McMahan et al., Wang et al.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I would strongly recommend the authors include a section listing the limitations of their work (comparison with more baselines, longevity of the attack, need for white-box access, etc.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments. Additional experiments are at [**EXP**](https://files.fm/f/jh4hy9pg3). 1. **Analysis of longevity** As presented in Fig 5 (main), Fig 9 (sup.), and Tab 4 (sup.), our method achieves extended longevity compared to DBA for both MNIST and CIFAR-10. We also conducted additional experiments to compare the durability of IBA, 3DFed [1], and Edge-case [26], and ours still outperformed the others w.r.t. durability (results in Fig 2 of EXP). We leverage the fixed-frequency attack for a fair evaluation of the methods under a low participation rate (i.e., an attack frequency of once every 10 rounds). 2. **Comparison to more baselines** We provided an additional study on the durability of trigger-less backdoor attacks (i.e., the edge-case attack of Wang et al.) to show that our proposed attack IBA has a longer lifespan (Fig 5 in the provided file). Since the patched triggers used in DBA are similar to the methodology employed in BadNets, we selected one more SOTA attack, i.e., 3DFed, to compare with ours in terms of durability. IBA outperformed both 3DFed and DBA, which exhibited a gradual accuracy decline beyond round 450. For further durability evaluation over training rounds, please see Fig 4 and 5 in `Extended Durability Evaluation` of EXP. 3. **Perturbation using the $\|\cdot\|_{\infty}$ norm** While various norms, such as the $L_2$ norm, can be considered for bounding the generator noise, they are generally inadvisable for backdoor attacks. The infinity norm guarantees a widespread distribution of the generated trigger across the input image (encompassing all pixels in the trigger); in contrast, the $L_2$ norm can result in localized artifacts within the image (with only select pixels forming the trigger). In simpler terms, employing the $L_2$ norm can make the backdoor attack more susceptible to detection by trigger-synthesis defenses like Neural Cleanse. Thus, the $\|\cdot\|_{\infty}$ norm stands as the preferred choice. We will enhance the clarity of this aspect. 4.
**Optimal choice of scaling factors** As discussed in Sec 3.2 (lines 165-169), empirically, if $\alpha$ is significantly higher than $\beta$, the classifier's performance on clean data rapidly converges to the optimal performance of the vanilla classifier; on the other hand, if $\beta$ is significantly higher than $\alpha$, the classifier's performance on backdoor data quickly reaches the optimal value. Thus, to balance the learning processes of both the main and backdoor tasks, we set $\alpha = \beta = 0.5$ in our experiments. 5. **How is $\lambda_{\xi}$ chosen?** $\lambda_{\xi}$ is the decay rate appearing in formula (4) that controls how fast the per-round reduction is. The larger $\lambda_{\xi}$ is, the faster the backdoor images become stealthy and the harder it is for the generative model to learn to perform well (it may overfit to the local data). Therefore, our empirical results suggest fixing the value of $\lambda_{\xi}$ to 0.001 to balance out the two factors mentioned above. Our suggestion is that $(1 - \lambda_{\xi})$ should be set in the range $[1\times (1-\gamma), 3\times (1-\gamma)]$, where $\gamma$ is the decay factor of the learning rate; in our inherited setup, $\gamma$ is set to 0.998. 6. **Acronyms in the table captions** As provided in Sec 4.2, MA and BA stand for Main Accuracy and Backdoor Accuracy, respectively. The bolded entries in the tables indicate the best MA/BA achieved by each method for a given dataset. MR stands for Model Replacement [1]. We will make the adjustments accordingly. 7. **Unusual writing (the word `ranging`)** Here, we aim to show that the MAs of the poisoned models are similar to those of the corresponding clean models, while their BAs are much higher under our attack. 8.
**Evaluating the IBA performance in Tab 2 against other attacks** We focus on providing a comprehensive empirical analysis of the proposed attack, including the properties of the designed trigger function and the attack's stealthiness and durability (the main concerns of related federated backdoor research). Therefore, regarding evaluating efficiency and stealthiness, we refer to the experimental design in prior works [31, 26, 1] to understand the performance of IBA under different attack schemes (fixed-frequency vs. fixed-pool) and mainstream backdoor defenses. The experiments show that IBA obtains BAs equivalent to other backdoor attacks, i.e., >90% on 3 datasets, while ensuring its stealthiness in terms of visualization and defense bypassing, as well as extended durability. 9. **Behavior of IBA+PGD / IBA+PGD+MR** In most cases, the combination IBA + PGD / IBA + PGD + MR helps improve the backdoor accuracy by 10% to 20%. However, Krum and RLR are the most challenging to circumvent. For Krum, MR seems not to work well on the MNIST dataset since the poisoned model is scaled before being submitted, and Krum chooses the model with the smallest Euclidean distance to its neighbors as the new global model. For RLR, the behavior of attacks is difficult to predict: at some suspicious dimensions of the model parameters, the learning rate will be reversed. If the poisoned model is scaled (by MR) and then reversed, the global model will move farther from the optimal point of the backdoor task. Nevertheless, IBA can still be effective against Krum and RLR. 10. **Attack at round 0 in Fig 3** Yes, the attack in Fig 3 starts at round 0. With a fixed attack frequency of 10, the attacker participates in rounds 0, 10, 20, ... 11. **Choice of $\epsilon$ for PGD** The norm bound for the PGD attack should be selected based on the observed per-round $L_2$-norm variation of the global model.
Following the works [1] and [26], the norm bound is set to 2 for the CIFAR-10/Tiny-Imagenet datasets and 1.5 for the MNIST dataset. We will add this to the updated version. 12. **Duplicate citations** We will revise this section. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I thank the authors for the detailed response. I am satisfied with most of the responses. I had indeed overlooked Table 4 in the sup, which studies the durability of IBA. However, I noticed something strange in Table 4. The Main accuracy (which I assume is listed under (%)) seems to continue dropping after the malicious clients are removed from the pool. Can you explain this behavior? --- Reply to Comment 1.1.1: Title: We sincerely thank the reviewer for the valuable comments and the quick response! Comment: We are glad to answer all of the previous comments/questions. Please see our responses to the additional question below. **Q: The Main accuracy (which I assume is listed under (%)) seems to continue dropping after the malicious clients are removed from the pool.** The column (with heading %) next to BA in Table 4 (supp. file) denotes the Relative Backdoor Accuracy w.r.t. the Backdoor Accuracy obtained at round 200, when the attackers (malicious clients) are removed from training. In other words, it is a measure of durability, i.e., how much the backdoor behavior persists after these malicious clients are removed completely from the training process. For example, for MNIST, at 50 rounds after the malicious clients are removed, IBA (with BA 99.84%) retains 99.90% of the original Backdoor Accuracy (99.94%); at 100 rounds, 99.57% retention; at 250 rounds, 99.14% retention. This shows the significant durability of IBA's attack, compared to DBA with only 38.76% retention at 250 rounds post-removal. We will revise the supplementary document accordingly to make this discussion clearer.
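The PGD-style constraint discussed in points 9 and 11 of the rebuttal above (keeping the poisoned model within an $L_2$ ball of radius $\epsilon$ around the current global model, with $\epsilon = 1.5$ for MNIST and $2$ for CIFAR-10/Tiny-Imagenet) amounts to a simple projection step. The sketch below is an illustrative assumption about where the projection sits in training, not the paper's exact code.

```python
import numpy as np

# Sketch of the PGD projection: after a local update, project the
# poisoned parameter vector back onto the L2 ball of radius eps around
# the current global model, so the submitted update stays norm-bounded.

def project_to_ball(theta_local, theta_global, eps):
    delta = theta_local - theta_global
    norm = np.linalg.norm(delta)
    if norm <= eps:
        return theta_local                    # already inside the ball
    return theta_global + delta * (eps / norm)  # rescale onto the boundary

rng = np.random.default_rng(2)
g = rng.standard_normal(1000)                 # global model parameters
local = g + 5.0 * rng.standard_normal(1000)   # drifted poisoned model
proj = project_to_ball(local, g, eps=2.0)
```

Models already within the ball are returned unchanged, which is why the bound should be chosen from the observed per-round $L_2$ variation of the global model.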
Summary: **Key Contribution:** This work proposes a two-stage model poisoning attack which works even with the participation of only a small number of malicious clients. Moreover, the proposed method is effective and robust against existing defenses. The attack is evaluated on a variety of existing defenses on CIFAR-10, MNIST, and Tiny-Imagenet. Strengths: **Novelty and Significance** The paper’s two-stage approach is unique and powerful. Learning a generative model for adversarial noise is additionally alarming as it is difficult to detect and defend against. **Clarity:** The paper is well-written and clear - the mathematical formulations are simple and easy to follow, with intuitive results. Moreover, the proposed augmentations to the base attack which help evade existing defenses are methodical and clearly motivated. Weaknesses: **Limited Evaluation:** The paper could be stronger by presenting more results under different numbers of clients. However, despite this weakness, I am overall satisfied with the quality and comprehensiveness of the evaluation. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Have you considered different choices of trigger generation models? - What was the distribution of data considered across the clients during evaluation? Have you considered evaluating with non-iid data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately discussed limitations and potential social impacts of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort reviewing the paper. We appreciate the review and comments on the paper. We would like to address the concerns as follows. 1. **Results under different numbers of clients** The standard settings are inherited from other works [1, 31], which use 10 clients. Here, we consider the cases where the number of clients increases and decreases. The performance of IBA is consistent under different numbers of participating clients (5 and 20 clients). For the performance of IBA (stand-alone) under different numbers of participating clients K over 500 rounds, please refer to Figure 1 in this [**pdf**](https://files.fm/f/r67m5e7a3) file. 2. **Choices of trigger generation models** In general, the generator function can be modeled as an autoencoder or the more complex U-Net architecture [18]. However, we observe a negligible performance difference between using an autoencoder and a U-Net. We conjecture that since learning to generate trigger noise is a much simpler task than learning to generate images, the simpler autoencoder is sufficient. Furthermore, training and performing inference with the autoencoder have lower computational overhead. For these reasons, we employ the autoencoder in all experiments in our paper. We will add this discussion to a later version. 3. **Data distribution across the clients during the evaluation** FL often presumes a non-i.i.d. data distribution across parties. Here, we use a Dirichlet distribution (Minka, 2000) with different hyperparameter values $\alpha$ to generate different data distributions, following the setup in (Bagdasaryan et al., 2018). Specifically, we follow the established evaluation protocol in previous works [26, 31] and simulate heterogeneous data partitioning by sampling $p_k \sim \text{Dir}_K(0.5)/\text{Dir}_K(0.01)$ for MNIST and CIFAR-10/Tiny-Imagenet, and allocating a proportion of each class to participating clients.
[1] Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., & Shmatikov, V. (2018). How to backdoor federated learning. arXiv preprint arXiv:1807.00459. [18] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing. [26] Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos. Attack of the tails: Yes, you really can backdoor federated learning. Advances in Neural Information Processing Systems, 33:16070–16084, 2020. [31] Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. Dba: Distributed backdoor attacks against federated learning. In International conference on learning representations, 2020.
Rebuttal 1: Rebuttal: We thank the reviewers for the initial comments and questions. We provide additional experimental evaluations to address the reviewers' comments in this [**EXP**](https://files.fm/f/r67m5e7a3) file, including the following experiments: - Performance when varying the number of participating clients - Extended longevity/durability of IBA (including 3DFed) - Comparisons with Neurotoxin and Edge-case - Grad-CAM on color images (CIFAR-10) In the separate responses to each reviewer, we use this EXP file as the main reference for any additional experiments.
NeurIPS_2023_submissions_huggingface
2023
Max-Sliced Mutual Information
Accept (poster)
Summary: The authors propose max-sliced mutual information (mSMI), which inherits important properties of mutual information. They show that the projection in mSMI reduces to that in CCA for jointly Gaussian variables. They also provide a method to estimate mSMI using a neural network, which is computationally more efficient than an existing method for estimating average-SMI. They derive an error bound for the estimation of mSMI. Strengths: The paper includes fundamental properties of mSMI, connections with existing studies, and a practical method to estimate mSMI. The results provided in this paper are sufficient to understand mSMI. Weaknesses: Although the paper is well-written for the most part, some parts should be explained more clearly or need more detailed explanations: for example, what information can be extracted from mSMI if the data is not Gaussian (in Remark 4), and how the ideas in 4.2 can be extended to more general neural networks. See the sections on questions and limitations for more details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Remark 4, the authors state that if the data is not Gaussian, the projections in mSMI do not coincide with those in CCA. Do you have any simple examples of how the projections in mSMI and those in CCA differ? Minor comments - The definition of I is missing in Eq. (1). The definition is in line 110, but it should be defined here. - In Definition 2, should G and H be subsets of $\{g:\mathbb{R}^{d_x}\to\mathbb{R}^k\}$ and $\{h:\mathbb{R}^{d_y}\to\mathbb{R}^k\}$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Theorem 1 is for the shallow network with the ReLU activation function.
Although the authors state that the ideas can be extended to other nonlinearities and deep architectures, how theoretical results like Theorem 1 can be extended to the deep case and to other activation functions is not clearly explained. Are there any difficulties? If there are, do you have any ideas to overcome them? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and comments, which we address below: **1. Inequivalence between mSMI and CCA in the non-Gaussian case:** This is an interesting question. We first note that the equivalence between mSMI and CCA in the Gaussian case hinges on the sufficiency of the cross-covariance matrix to fully characterize the dependence between the projections of $X$ and $Y$. As non-Gaussian mutual information (MI) cannot be characterized merely by second-order statistics, we do not expect this equivalence to hold in general. Capturing this inequivalence analytically seems challenging, as it requires a closed-form formula for the MI between the projected variables, which can further be analytically optimized over the corresponding Stiefel manifolds. We are currently exploring such examples and hope to add one to the final version of the paper. In the meantime, we provide numerical evidence for the above. Consider a 3-dimensional random variable $X$ whose components are independent and uniformly distributed on $[-1,1]$, i.e., $X=(X_1\ X_2\ X_3)^\intercal$ with $X_i\sim\mathsf{Unif}[-1,1]$. Define $Y=(Y_1\ Y_2\ Y_3)^\intercal$ by $Y_i=X_i^2-\mathbb{E}[X_i^2]+U_i$, where $U_i\sim\mathsf{Unif}[-1,1]$ for $i=1,2,3$. Using a simple CCA solver and numerically solving for mSMI, we obtain different maximizing projection directions in $\mathsf{St}_{1,3}$. Specifically, a Python simulation yields $\theta_{CCA}=(0.151, -0.449, 0.88)$, $\phi_{CCA} = (0.129, 0.106, 0.985)$, $\theta_{\bar{\mathsf{SI}}}=(-0.063, -0.99, -0.122)$, $\phi_{\bar{\mathsf{SI}}} = (-0.062, 0.998, 0.0195)$, which are significantly different. While we hope to have an analytical example to include in the revision, we are happy to include the numerical example in the text (perhaps with some supporting graphics) as a contingency plan. **2. Generalizing Theorem 1 to other nonlinearities and deep networks:** Thank you for this excellent question.
We divide our response into two parts: addressing other activation functions first and then discussing deep neural estimation (NE). A list of references used throughout this answer that do not appear in the paper is provided below. (i) Non-ReLU activations: Our NE bound from Theorem 1 (as well as those in the average-sliced MI papers) is based on the theory developed in [39]. As highlighted therein, their bounds extend to nonlinearities beyond the ReLU activation. Specifically, their theory accounts for any bounded sigmoidal activation with $\lim_{z\to-\infty}\sigma(z)=0$ and $\lim_{z\to\infty}\sigma(z)=1$. To adapt our NE error bound to sigmoid activations, one should use the approximation error bounds from [A] instead of the currently used ones from [B]. This change will require a small modification of the class of distributions $\mathcal{P}_{\mathsf{KL}}$ that our theory accounts for (see the definition in Section 4.2). In particular, one would have to consider an extension $r\in\mathcal{C}_b^{k+2}$, rather than $r\in\mathcal{C}_b^{k+3}$. However, as noted in [39], to achieve the $O(\ell^{-1/2})$ approximation error with sigmoid activations one must scale the hidden layer parameters as $\ell^{1/2}\log \ell$, where $\ell$ is the number of neurons. With ReLU activations, on the other hand, the network has bounded parameters independent of $\ell$. For that reason, we opted to use ReLU networks in our statement. We plan to include a discussion to the effect of the above in a remark following Theorem 1. [A] Barron, Andrew R. “Universal approximation bounds for superpositions of a sigmoidal function.” IEEE Transactions on Information Theory 39.3 (1993): 930-945. [B] Klusowski, Jason M., and Andrew R. Barron.
“Approximation by combinations of ReLU and squared ReLU ridge functions with $\ell^1$ and $\ell^0$ controls.” IEEE Transactions on Information Theory 64.12 (2018): 7649-7656. (ii) Concerning the extension to deep NEs, this is an interesting question that is on our research agenda going forward, as noted in the second paragraph of Section 6. The NE theory for $f$-divergences from [39], on which our Theorem 1 relies, decomposes the error analysis into two parts: function approximation and statistical estimation. The former is controlled using approximation error bounds for sparse shallow nets, while the latter requires control over the covering number of the corresponding neural network class. Sharp bounds for these aspects are available in the literature (cf. Theorems 8 and 11 from [39]), which results in the optimal $n^{-1/2}$ convergence rate for shallow NEs, as established in [39] and exploited in our work. Similar approximation error and covering number bounds are available for deep networks; cf., e.g., [C,D,E]. However, we believe that these bounds are not sharp since the best NE error rate that can be obtained from them is $O(n^{-1/4})$. The advantage of deeper architectures is that approximation is possible under relaxed smoothness assumptions on the distributions (compared to the shallow case), but we expect that the same $n^{-1/2}$ parametric rate would be achievable. This suggests that more work is needed on the fundamental components of the deep NE analysis to obtain a satisfactory theory. While this is a fascinating research avenue, it falls outside the scope of the current paper and is left for future work. Nevertheless, we will expand our discussion on this topic in Section 6 and provide more details as above. [C] Schmidt-Hieber, Johannes. “Nonparametric regression using deep neural networks with ReLU activation function.” Annals of Statistics 48.4 (2020): 1916-1921. [D] Bresler, Guy, and Dheeraj Nagaraj.
“Sharp representation theorems for ReLU networks with precise dependence on depth.” NeurIPS 33 (2020): 10697-10706. [E] Shen, Zuowei. “Deep Network Approximation Characterized by Number of Neurons.” Communications in Computational Physics 28.5 (2020): 1768-1811. **3. Minor comments:** Thank you. These will be corrected in the revision. --- Rebuttal Comment 1.1: Comment: I have read the response. I think my questions and comments are properly addressed. Thank you for your response. --- Reply to Comment 1.1.1: Comment: Thank you for your effort and the helpful review.
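The numerical example from point 1 of the rebuttal above can be reproduced with a short simulation. Below is a minimal NumPy sketch (the seed and sample size are our own arbitrary choices, and the CCA solver is a hand-rolled whitened-SVD version rather than whatever solver the authors used). It also highlights why the two methods disagree here: every odd moment of $\mathsf{Unif}[-1,1]$ vanishes, so the population cross-covariance of $X$ and $Y$ is exactly zero and the empirical CCA directions are essentially sample noise, while the quadratic dependence that mSMI detects remains strong.

```python
import numpy as np

rng = np.random.default_rng(0)           # seed/sample size are arbitrary choices
n = 50_000
X = rng.uniform(-1.0, 1.0, size=(n, 3))
U = rng.uniform(-1.0, 1.0, size=(n, 3))
Y = X**2 - 1.0 / 3.0 + U                 # E[X_i^2] = 1/3 for Unif[-1,1]

def cca_top_pair(A, B):
    """Top canonical directions and correlation via an SVD of the whitened
    cross-covariance Sxx^{-1/2} Sxy Syy^{-1/2} (a standard CCA recipe)."""
    Ac, Bc = A - A.mean(0), B - B.mean(0)
    m = len(A)
    Sxx, Syy, Sxy = Ac.T @ Ac / m, Bc.T @ Bc / m, Ac.T @ Bc / m

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    Uo, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    theta, phi = Wx @ Uo[:, 0], Wy @ Vt[0]
    return theta / np.linalg.norm(theta), phi / np.linalg.norm(phi), s[0]

theta_cca, phi_cca, rho1 = cca_top_pair(X, Y)
# rho1 is near zero: odd moments of Unif[-1,1] vanish, so the population
# cross-covariance is exactly 0 and CCA is blind to the quadratic dependence.
```

The specific direction vectors therefore vary from run to run, which is consistent with the rebuttal's observation that they differ markedly from the mSMI maximizers.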
Summary: The paper proposes Max-Sliced Mutual Information (mSMI), which equals the maximal mutual information between low-dimensional projections of the high-dimensional variables. The mSMI can capture intricate dependencies in the data while being amenable to fast computation and scalable estimation from samples. In addition, the paper proposes multivariate conditional mSMI and max-sliced entropy, which are extensions of mSMI. The paper discusses the structural properties of mSMI, including bounds in terms of the sub-space dimension and the original mutual information, identification of independence, a KL divergence representation, a sub-chain rule, and tensorization. Moreover, the paper discusses how Gaussian mSMI is related to canonical correlation analysis (CCA) and how max-sliced entropy is related to PCA. In addition, generalized mSMI with two general classes of functions is also discussed. The neural estimation of mSMI and its error are presented in Section 4. Finally, the paper presents experiments that demonstrate the utility of mSMI for several synthetic and real-world tasks, encompassing independence testing, multi-view representation learning, and algorithmic fairness. Strengths: * The paper is detailed and very well-written. * Max-sliced mutual information is a natural extension of SMI. * Connections between mSMI and conventional methods such as CCA and PCA. * Neural estimation of mSMI has a better error than SMI since no Monte Carlo estimation is needed. * mSMI can be implemented efficiently with neural estimation by merging the linear projection into the neural network. * Experiments on independence testing show that mSMI is both better and faster than SMI. Weaknesses: * mSMI is quite incremental given the existence of SMI, k-SMI, the max-sliced Wasserstein distance, and the max-k-sliced Wasserstein distance. * There is no comparison between mSMI and SMI (k-SMI) in multi-view representation learning and fair representation learning.
This is quite questionable since mSMI is an extension of SMI (k-SMI). I believe a comparison in both computation and performance is needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the paper include an application from the previous SMI papers, e.g., InfoGAN? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for feedback and comments, which we address below: **1. max-SMI is incremental with respect to SMI and sliced Wasserstein measures:** Thank you for bringing this point up. We divide our response into two parts: (i) comparison with sliced Wasserstein distances, and (ii) comparison with other sliced information measures. (i) While the original SMI, proposed in [26], was inspired by slicing techniques for Wasserstein distances (as mentioned in the introduction of that work), we note that these objects quantify different things. Sliced Wasserstein distances measure discrepancy between probability distributions, while mutual information (MI) and its sliced variants quantify dependence between random variables. Additionally, as Wasserstein distances are rooted in optimal transport theory, the corresponding notion of discrepancy is geometric in nature and adapts to the structure of the metric space in which the data resides. Standard/sliced MI, on the other hand, is induced by the KL divergence, which is an entropy-based quantity that only depends on the log-likelihood of the considered distributions and overlooks the underlying geometry. For these reasons, we view SMI measures and sliced Wasserstein distances as not directly comparable/related, despite the said inspiration. (ii) Concerning the comparison to SMI or $k$-SMI, we believe that max-SMI (mSMI) addresses core limitations of the former which are crucial for machine learning applications. Specifically, SMI and $k$-SMI were proposed to circumvent the statistical and computational difficulties associated with classical MI in high-dimensional settings, while inheriting important structural properties from it. However, the average-sliced methods still suffer from a burdensome Monte Carlo (MC) step needed for their estimation/computation and lack interpretability of the notion of dependence being quantified. The mSMI addresses both of these issues as follows.
First, by replacing the average over slices with a maximum, mSMI dispenses with the MC step for estimation, replacing it with a simple optimization that is readily absorbed into the neural estimation paradigm with negligible cost. Thus, while estimation of average-SMI or $k$-SMI requires computing $m$ different estimators ($m$ being the number of MC samples), one for each slice, mSMI entails only a single estimation problem. As discussed in Section 5.1 and illustrated in Figure 1, this results in significant speedups in runtime with no loss of performance (see, e.g., Figure 2 for independence testing). Second, we argue that mSMI also enjoys better interpretability than its average-sliced counterparts due to the equivalence, under the Gaussian setting, to the well-understood notion of CCA. Indeed, for Gaussian data, mSMI and CCA coincide, and both the optimal projection directions and the mSMI value adhere to simple closed-form solutions (see Proposition 2). Since CCA is a classical idea that has been thoroughly studied over the years, our understanding thereof is inherited by Gaussian mSMI, endowing it with a clear interpretation. By the same token, the relation between max-sliced entropy and PCA, which is discussed in Supplement A.3, serves a similar interpretability role. To the best of our knowledge, no such crisp connections are available for average-SMI variants. In summary, while SMI and $k$-SMI were important steps towards a tractable and efficiently computable measure of information, mSMI provides another significant improvement over those approaches in several aspects. We therefore do not view the contribution of our work as incremental and believe that mSMI will serve as a useful tool for high-dimensional machine learning applications. To further clarify the above points, we will edit Section 1.1 of the introduction in the final version to better motivate mSMI and contrast it with existing average-sliced methods.
We will also add a remark before Section 3.1, where a discussion to the effect of the above will be provided. **2. Comparison between mSMI and SMI ($k$-SMI) in representation learning tasks:** This is a great question. The goal of these experiments was to use the considered measures (SLICE and mSMI for fairness; CCA and mSMI for multi-view representation) to extract the best latent features for a corresponding downstream task. While mSMI provides such a latent feature (namely, the optimal projection direction), average-sliced information measures do not. Indeed, SMI and $k$-SMI average multiple MI terms between the projected variables, and it is unclear how to extract a single feature to represent the data from them. For this reason, SMI and $k$-SMI were not used for this experiment. One could alternatively attempt to maximize SMI or $k$-SMI as an objective for feature extraction. However, this would reduce back to mSMI, as discussed in Remark 3 of our paper and proven in Proposition 4 of [26]. This further justifies why average-sliced variants were not considered for these experiments. We will add text to Sections 5.3 and 5.4 to clarify the above. **3. Including an InfoGAN application:** Thank you for this suggestion. Our choice of experiments was driven by two main considerations: (i) present instances where the mSMI enables improving upon existing methods (e.g., neural estimation and independence testing, for which mSMI indeed performs better/faster), and (ii) diversify the experiment portfolio beyond the examples for which sliced information measures were previously used (e.g., multi-view representation and fairness, which were not considered before). That said, we recognize the potential of mSMI for constructing a 'max-sliced InfoGAN' and are happy to explore this application in order to include such results in the camera-ready version. We will also compare the performance to that of the average-sliced SMI InfoGAN from [27]. 
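The Gaussian equivalence invoked in point 1 above can be sanity-checked in a toy case. Assuming identity marginal covariances (our illustrative choice; the specific numbers below are not from the paper), the canonical correlations are just the singular values of the cross-covariance, and the MI between one-dimensional jointly Gaussian projections with correlation $\rho$ is $-\tfrac{1}{2}\log(1-\rho^2)$, so the $k=1$ mSMI is attained at the top canonical pair, exactly as CCA would select it:

```python
import numpy as np

# Illustrative population cross-covariance with identity marginal covariances;
# the canonical correlations are then simply its singular values.
Sxy = np.diag([0.9, 0.5, 0.1])
rho = np.linalg.svd(Sxy, compute_uv=False)

# MI between 1-d jointly Gaussian projections with correlation r is
# -0.5 * log(1 - r^2), maximized by the top canonical correlation.
msmi_k1 = -0.5 * np.log(1.0 - rho[0] ** 2)
print(round(float(msmi_k1), 4))  # → 0.8304
```

Here the value $-\tfrac12\log(1-0.9^2)\approx 0.8304$ is the mutual information carried by the single best projection pair; any other unit directions yield a smaller correlation and hence a smaller MI.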
--- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for your response. I appreciate the effort of the authors in adding new experiments and answering my question. Overall, I believe all my questions are addressed. Hence, I raised my score to 5. Best regards, --- Reply to Comment 1.1.1: Comment: Thank you for your kind response. We were wondering if there are any other additions to the text that the reviewer would like to see that could further improve their assessment of the work?
Summary: The paper introduces an adaptation of sliced mutual information that focuses on the maximal mutual information between low-dimensional linear projections of random variables. This measure (mSMI) has desirable properties and is approachable by neural estimation. The authors show that mSMI can be approximated for a certain class of continuous random variables more efficiently than (sliced) mutual information. Some illuminating experiments illustrate potential use cases and suggest good practical performance in them as well. Strengths: The paper is well written and presents a new measure of dependence from all essential perspectives: useful theoretical properties, approximation that is theoretically possible for a large class of random variables, and a feasible practical implementation. The paper is not overloaded in content and focuses on the topic at hand, providing all necessary information without giving unnecessary details or too many experiments. Weaknesses: I don't see any constructive and actionable insights on how the work could improve towards its stated goals. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: How can the results for fairness aware methods in Table 2 be better than fairness agnostic? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors are honest in the limitations: The experiments show that for some cases sliced mutual information might work better than mSMI. Further research on the best choice of the dimension k to use might be interesting but is clearly beyond the scope of this work and would only clutter the presentation.
A proof for the non-linear method is not yet available but is most likely also something for future work and a separate manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for feedback and comments, which we address below: **“How can the results for fairness aware methods in Table 2 be better than fairness agnostic?”** **Answer:** This is an excellent question. Note that the reported $\rho_{\mathsf{HGR}}$ coefficients are evaluated on test data that was not used in training. As a result, the reported $\rho_{\mathsf{HGR}}$ values depend on the ability of the learned representations to generalize to unseen examples. If it happens that the protected attributes $A$ in the problem are spurious features that are not causally predictive of the outcome $Y$, then a representation $Z$ that overlooks $A$ may generalize better for predicting $Y$. This can explain the small improvement in $\rho_{\mathsf{HGR}}(Z,Y)$ observed when using fairness-aware methods. That said, the above is a conjecture and it strongly relies on the intrinsic structure of the problem. It may be the case that this trend does not persist for other datasets/tasks. --- Rebuttal Comment 1.1: Comment: Thank you for this explanation, I don't have anything more to add.
Summary: This paper defines a novel measure of independence called Max-Sliced Mutual Information (mSMI) which provides a non-linear generalization of Canonical Correlation Analysis. This measure can also be viewed as a variant of Mutual Information with better tractability in high dimensions. In fact, mSMI is closely related to average-sliced mutual information (aSMI) introduced in [26,27]. Whereas aSMI is computed by averaging an objective involving mutual information over a product of Stiefel manifolds $St(k,d_x) \times St(k,d_y)$, mSMI is actually obtained by maximizing this objective over the same domain. This work also introduces a generalized mSMI where the optimization is performed (loosely speaking) over a class of non-linear functions. By leveraging the Donsker-Varadhan variational form for the mutual information, the authors show that mSMI can be estimated thanks to a neural estimator for which Theorem 1 provides convergence rates. From the numerical perspective, three applications are studied: independence testing, multi-view representation learning and fair representation learning. For independence testing, mSMI seems to perform better than aSMI when $k$ is not too small. The time complexity of aSMI is also reported to be larger than the time complexity of mSMI. In the comparisons with a few baselines in the case of multi-view representation learning or fair representation learning, mSMI is often the best method in terms of performance. Strengths: This paper is written clearly and its results are significant. The theoretical contributions are interesting and the proofs given in the supplementary material are easy to read. The construction of mSMI nicely arises as a variant of aSMI [26,27] and the mSMI neural estimator has a clear interpretation. I find the numerical comparisons in Section 5 convincing. Weaknesses: I do not see serious methodological weaknesses. Here are some minor remarks.
- The rule for emphasizing entries with bold in Table 2 of Section 5.4 is unclear to me. In particular, $\rho_{HGR}$ equals $0.958$ for $k=2$ and is not displayed in bold whereas the entry corresponding to $k=6$ is emphasized although it takes a smaller value ($0.957$). - In the same section, it is unclear how $k$ has to be selected by the user since mSMI does not outperform SLICE for all the values of $k$. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Here is a question about Theorem 1. Why is $\|\mathcal{X}\times\mathcal{Y}\|$ defined at the end of this statement? This quantity does not appear explicitly in the theorem. This is a bit confusing. - In Table 2, the values of $k$ range from $1$ to $7$ whereas they range from $3$ to $7$ in Table 5 of the supplementary material (Adult data set). What happens for small $k$ in the case of the Adult data set? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I do not foresee a negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for feedback and comments, which we address below: **1. Bold display of both $\rho_{\mathsf{HGR}}$ values:** Thank you for this observation. The quantity that measures fairness is $\rho_{\mathsf{HGR}}(Z,A)$ and we aim to minimize it while maintaining a high value of $\rho_{\mathsf{HGR}}(Z,Y)$, so that the ability to predict $Y$ from $Z$ is not compromised. As seen in the table, the most fair result (i.e., when $\rho_{\mathsf{HGR}}(Z,A)$ is smallest) is achieved for $k=6$ and our intention was to highlight that. However, we acknowledge that marking both values of $\rho_{\mathsf{HGR}}(Z,A)$ and $\rho_{\mathsf{HGR}}(Z,Y)$ in boldface may be confusing, as the corresponding $\rho_{\mathsf{HGR}}(Z,Y)$ was not maximal. For that reason, in the revision we will remove the boldface from $\rho_{\mathsf{HGR}}(Z,Y)$ and mark only $\rho_{\mathsf{HGR}}(Z,A)$. **2. About the choice of $k$:** The choice of $k$ is indeed an interesting and important question—one that we plan to develop a theory for in the future. At the moment, we adopt an empirical approach that sweeps over a range of $k$ values and treats it as a hyperparameter of the task. While an in-depth investigation of this point is left for future work, we briefly discuss an interesting tradeoff concerning the choice of $k$: between sample complexity and capturing as much information as possible in the mSMI. On one hand, note that the $k$-dimensional mSMI increases with $k$, and is uniformly (in $k$) upper bounded by the classical mutual information between the variables in the ambient space. This fact encourages the choice of a larger $k$. On the other hand, recall from our Theorem 1 that the sample complexity of mSMI estimation grows rapidly with $k$. Hence, as $k$ increases there should be a tradeoff between the returns of increasing the population mSMI, and the growing sample complexity. 
A particularly interesting setting is when the supports of the distributions lie in $d'$-dimensional subspaces. In this case, $k=d'$ is sufficient to capture the full classical mutual information, and increasing $k$ further only serves to degrade sample complexity without further gain. Extrapolating from this point, we conjecture that the optimal value of $k$ is related to the intrinsic dimension of the data distribution, even when it is not strictly supported on a low-dimensional subset. We hope to prove this conjecture in the future and will discuss this point in the Conclusions section of the revision. **3. About the definition of $\lVert\mathcal{X}\times\mathcal{Y}\rVert$ in Theorem 1:** We acknowledge that the original statement was somewhat confusing. Our intention was to list the parameters on which the constant $C$ depends, with $\lVert\mathcal{X}\times\mathcal{Y}\rVert$ being one of them. To save space, we had defined this quantity at the same spot where it was listed. To avoid confusion, in the revision we will modify the wording after the bound to: “... where the constant $C$ depends on $M$, $b$, $k$, and the radius of the ambient space, which is given by $\lVert\mathcal{X}\times\mathcal{Y}\rVert\coloneqq \sup_{(x,y)\in \mathcal{X}\times\mathcal{Y}}\lVert(x,y)\rVert$.” We hope that this resolves the issue. **4. About omitting $k=1,2$ values in Table 5:** In Table 5 of the supplementary material we have omitted the results for $k=1,2$ since we thought it was sufficient to run the experiment around the clearly best performing values of $k=4$ (or $5$). Nevertheless, we agree with the reviewer that it is better to include the $k=1,2$ values for consistency and will do that in the revision. The $\rho_{\mathsf{HGR}}(Z,A)$ for $k=1,2$ is $0.43$, $0.393$, respectively. --- Rebuttal Comment 1.1: Comment: Thank you. This answers my questions.
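The tradeoff described in the answer on the choice of $k$ can be made concrete in the Gaussian case. The snippet below adopts, purely as a working hypothesis for illustration (not a claim from the paper), a CCA-style form for the $k$-dimensional Gaussian mSMI, with one $-\tfrac12\log(1-\rho_i^2)$ term per canonical correlation; the correlation values are invented:

```python
import numpy as np

# Illustrative canonical correlations for a jointly Gaussian pair.
rho = np.array([0.9, 0.5, 0.1])

# Working hypothesis: the k-dimensional Gaussian mSMI accumulates one
# -0.5 * log(1 - rho_i^2) term per canonical correlation.
msmi_by_k = -0.5 * np.cumsum(np.log(1.0 - rho ** 2))

# The sequence grows with k and is capped by the classical MI of the full
# vectors, which for this covariance structure equals the k = 3 value.
full_mi = -0.5 * np.log(1.0 - rho ** 2).sum()
```

Under this form the population gain from raising $k$ shrinks as the remaining correlations decay, while (per Theorem 1) the sample complexity grows with $k$, which is exactly the tension the rebuttal describes.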
NeurIPS_2023_submissions_huggingface
2023
Semi-Supervised Domain Generalization with Known and Unknown Classes
Accept (spotlight)
Summary: This paper introduces a new methodology for Semi-Supervised Domain Generalization (SSDG). This paper proposes the Class-Wise Adaptive Exploration and Exploitation (CWAEE) method, which contains one-vs-rest classifiers, class-wise adaptive thresholds, and consistency regularization based on Fourier Transformation. This algorithm shows improvements in their SSDG setting. Strengths: Pros. - This paper is well-written, motivated, and easy to follow. - This paper proposes several components, among which the class-wise adaptive thresholds and the Fourier-Transformation-based consistency regularization are very interesting. (If the overview image for consistency regularization in the appendix is added in the main paper, it would be much easier to understand the concept. It is very novel and interesting.) - This paper shows extensive experiments and shows the effectiveness of this method. Weaknesses: Cons. - (Major) This paper didn’t follow the previous training pipeline for the baseline. In FixMatch, it trains the model for 2^20 iterations. For Cifar10/Cifar100, it is about 10k epochs. With only 80 epochs, I would say these are not the original baseline numbers. - (Major) This paper uses an ImageNet pre-trained model. It means that the unlabeled data could have been seen in the pre-training phase. - (Major) This paper evaluates their algorithm in only their setting. I think it can be evaluated in the same OOD setting as OpenMatch (known/unknown in Cifar10/Cifar100 or ImageNet). If this paper could show the improvement in this setting, the results would be more solid. - (Minor) The one-vs-rest classifier is very similar to the One-vs-All Outlier Detector in OpenMatch. (Novelty) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: (Major) This paper didn’t follow the previous training pipeline for the baseline. In FixMatch, it trains the model for 2\^20 iterations. For Cifar10/Cifar100, it is about 10k epochs. With only 80 epochs, I would say these are not the original baseline numbers. A: Our paper focuses on Semi-Supervised Domain Generalization (SSDG), thus we follow the training setting of the seminal SSDG work [11] for a fair comparison. [11] Zhou et al., Semi-Supervised Domain Generalization with Stochastic StyleMatch, NeurIPS 2021 Workshop (extended version published in IJCV'23). W2: (Major) This paper uses an ImageNet pre-trained model. It means that the unlabeled data could have been seen in the pre-training phase. A: We follow the training setting of the seminal work [11] to use the ImageNet pre-trained model. The results of the proposed method trained from scratch on OfficeHome with 25:20:20 (*known classes*, *seen unknown classes* and *unseen unknown classes*) are summarized in Table 2. It can be found that our method outperforms the other compared methods when the model is trained from scratch.

Table 2: Leave-one-domain-out average results of *known classes* accuracy and *unknown classes* AUROC on OfficeHome.

| | Accuracy | AUROC |
|--------------|:---------:|:---------:|
| DeepAll | 21.75 | 54.07 |
| UDG | 19.91 | 55.95 |
| DAML | 29.35 | 55.55 |
| FixMatch | 40.68 | 57.25 |
| OpenMatch | 28.76 | 56.22 |
| StyleMatch | 42.57 | 56.72 |
| CWAEE (ours) | **43.66** | **61.29** |

W3: (Major) This paper evaluates their algorithm in only their setting. I think it can be evaluated in the same OOD setting as OpenMatch (known/unknown in Cifar10/Cifar100 or ImageNet). If this paper could show the improvement in this setting, the results would be more solid. A: We focus on a more realistic but harder scenario where the learned model is not only required to classify *known classes* but also to recognize *unknown classes* on an *unseen* target domain.
In order to carefully exploit unlabeled data from multiple source domains, we assign pseudo-labels to unlabeled samples whose scores are higher than the *known classes* thresholds $\delta^{1:|\mathcal{C}^l|}\_{knw}$ or lower than the *unknown classes* thresholds $\delta^{1:|\mathcal{C}^l|}\_{unk}$. The unlabeled samples whose scores are between $\delta^{1:|\mathcal{C}^l|}\_{knw}$ and $\delta^{1:|\mathcal{C}^l|}\_{unk}$ are only utilized through the consistency regularization loss $\mathcal{L}\_{con}^u$. In a degenerate situation where the model is trained and tested on the same domain, like the OOD setting in OpenMatch, OpenMatch applies entropy minimization on **all** unlabeled data, which may be slightly more effective since it is an easier scenario in which to separate and utilize **all** unlabeled samples of *known* and *unknown classes*. However, in our setting, OpenMatch is less effective than ours, as shown by the experimental results in Table 1 of our paper. W4: (Minor) The one-vs-rest classifier is very similar to the One-vs-All Outlier Detector in OpenMatch. (Novelty) A: Semi-Supervised Domain Generalization aims to use unlabeled data to improve the generalization of the model on *unseen* domains. Unfortunately, in a more realistic scenario, the unlabeled data usually contains *unknown classes*, which may significantly degrade the generalization of the model, and the existing methods are not applicable to this scenario. The main contribution of our paper is that we propose a feasible solution with commonly used basic components, e.g. one-vs-all classifiers [4], class-wise adaptive thresholds [5] and consistency regularization [6], for this realistic scenario. The experiments conducted on realistic datasets verify the effectiveness and superiority of our method. [4] Rifkin et al., In Defense of One-vs-all Classification, JMLR'04. [5] Gui et al., Towards Understanding Deep Learning from Noisy Labels with Small-loss Criterion, IJCAI'21.
[6] Sajjadi et al., Regularization with Stochastic Transformations and Perturbations for Deep Semi-supervised Learning, NIPS'16. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Many of my concerns are addressed. I am still not fully convinced about the training setting, but I appreciate the rebuttal. I raised my score. --- Reply to Comment 1.1.1: Comment: Thanks for your comments and kind suggestions. We will provide more discussions about the widely used training setting on semi-supervised domain generalization in the future revision. --- Rebuttal 2: Comment: We sincerely thank you for your time and efforts in reviewing this paper and hope that our response has satisfactorily addressed your concerns. We are looking forward to discussing with you during the discussion period. We believe that your insights and suggestions can further improve this paper.
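The class-wise pseudo-labeling rule described in this rebuttal (assign a known-class pseudo-label above $\delta_{knw}$, mark as unknown below $\delta_{unk}$, otherwise use the sample only through the consistency loss) can be sketched as a toy NumPy routine. The score matrix and threshold vectors below are hypothetical; in the actual method the thresholds come from the fitted score distributions:

```python
import numpy as np

def partition_unlabeled(scores, thr_knw, thr_unk):
    """scores[i, c]: sigmoid output of the c-th one-vs-rest head on sample i.
    thr_knw / thr_unk: per-class known/unknown thresholds (hypothetical)."""
    best = scores.argmax(axis=1)
    confident_known = scores.max(axis=1) > thr_knw[best]  # pseudo-label = best
    unknown = (scores < thr_unk).all(axis=1)              # rejected everywhere
    ambiguous = ~confident_known & ~unknown               # consistency loss only
    return confident_known, unknown, ambiguous

scores = np.array([[0.95, 0.10, 0.08],   # clearly class 0
                   [0.05, 0.02, 0.03],   # rejected by every head -> unknown
                   [0.40, 0.30, 0.20]])  # in-between -> consistency loss only
known, unknown, ambiguous = partition_unlabeled(
    scores, thr_knw=np.full(3, 0.9), thr_unk=np.full(3, 0.1))
```

Note that the three masks partition the batch: a sample is either confidently known, rejected by all one-vs-rest heads, or left in the uncertain band between the two thresholds.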
Summary: This paper focuses on the realistic scenario and proposes a semi-supervised domain generalization method. The method first explores unlabeled data by detecting known and unknown classes, and then exploits the data by adopting consistency regularization based on Fourier Transformation. The experiments show the effectiveness and superiority of the proposed method. Strengths: This paper is well-written and organized. It is easy to follow. This paper focuses on the realistic scenario where the known classes are mixed with unknown classes in the data, and proposes a method which outperforms the state-of-the-art semi-supervised domain generalization methods. This paper uses one-vs-rest classifiers and class-wise adaptive thresholds, which is helpful for detecting unknown classes. Weaknesses: The method needs to train one-vs-rest classifiers. When the number of classes is large, the computation cost is high. The calibration requires validation data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The method uses the class-wise adaptive thresholds to detect unknown classes. Why do these thresholds work intuitively? There are hyper-parameters in Eq. (9); is the performance sensitive to the hyper-parameters? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: The method needs to train one-vs-rest classifiers. When the number of classes is large, the computation cost is high. A: When training $|\mathcal{C}^l|$ one-vs-rest classifiers for $|\mathcal{C}^l|$ *known classes*, we keep the architecture and parameters of the network unchanged and only replace the softmax function after the $|\mathcal{C}^l|$-way linear classifier $h_\omega$ with $|\mathcal{C}^l|$ sigmoid functions. These $|\mathcal{C}^l|$ one-vs-rest classifiers share the same backbone network $g_\theta$ and have their own classification heads $h_\omega^c$. Thus, the computation cost of $|\mathcal{C}^l|$ one-vs-rest classifiers in our method is actually equal to that of the softmax classifier. W2: The calibration needs the validation data. A: Score calibration, which uses the validation dataset, is a commonly used method for *unknown class* detection, *e.g.* [7, 8, 9, 10]. [7] Guo et al., On Calibration of Modern Neural Networks, ICML'17. [8] Liang et al., Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks, ICLR'18. [9] Minderer et al., Revisiting the Calibration of Modern Neural Networks, NeurIPS'21. [10] Wang et al., Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence, NeurIPS'21. Q1: The method uses the class-wise adaptive thresholds to detect unknown classes. Why do these thresholds work intuitively? A: We train a one-vs-rest classifier for each *known class*, and the output scores of each one-vs-rest classifier indicate whether the sample belongs to the class or not. The samples of *unknown classes* do not belong to *known classes*, so the scores of *unknown classes* are usually lower than those of *known classes* on the corresponding one-vs-rest classifier. It is required to set proper thresholds for the one-vs-rest classifiers to detect *known classes* and *unknown classes*.
However, due to the differences between the classes, it is difficult to manually choose the optimal threshold for each classifier. The score distributions of *known classes* and *unknown classes* are usually different, as shown in Figure 4 in our paper. Thus, we use a two-component beta mixture model to fit the score distributions and set the threshold of each one-vs-rest classifier using the means of the two beta components. The experiments conducted on realistic datasets also verify the effectiveness of our method. Q2: There are hyper-parameters in Eq. (9); is the performance sensitive to the hyper-parameters? A: The performance of our method is not sensitive to the hyper-parameters. The results of the ablation study on the hyper-parameters are in Appendix C of the submitted supplementary files. --- Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal and the other reviews. The rebuttal has addressed my concerns. Therefore, I would like to keep my inclination to accept the paper.
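The cost argument in the W1 answer above can be sketched in a few lines: one shared backbone feature, one linear layer, and the softmax simply swapped for per-class sigmoids. This is an illustrative numpy sketch with made-up names and random features, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, feat_dim = 5, 16

# Features z = g_theta(x) from the shared backbone for a batch of 4 samples.
z = rng.normal(size=(4, feat_dim))

# One |C^l|-way linear classifier h_omega; the one-vs-rest classifiers
# reuse exactly these weights, one column per class.
W = rng.normal(size=(feat_dim, num_classes))
logits = z @ W

# Softmax classifier: mutually exclusive class probabilities.
softmax_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# One-vs-rest classifiers: the same logits passed through independent
# sigmoids, so each column is a binary "class c vs. rest" score and the
# forward cost is identical to the softmax classifier's.
ovr_scores = 1.0 / (1.0 + np.exp(-logits))
```

Both heads perform the same matrix multiply; only the final activation differs, which is why the rebuttal can claim equal computation cost.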
Summary: The paper considers the setting of semi-supervised domain generalization (SSDG) where both unlabeled source and target domains can contain unknown classes, i.e., classes not seen as labeled instances in source domains. The goal here is to learn a classifier which will (i) reliably distinguish seen classes from unknown classes and (ii) effectively utilize unlabeled source data to learn domain-generalizable representations. To approach the aforementioned goals, the authors suggest using one-vs-rest classifiers and class-adaptive thresholds to distinguish between known and unknown classes. Then they apply different optimization objectives for known/unknown classes to improve the learning of domain-generalizable representations. The proposed methodology leads to improvements over the considered baselines on the standard benchmarks. Strengths: - The proposed problem formulation is important Weaknesses: - Though the proposed setting is indeed realistic, the experimental setup is limited. It would be helpful to consider WILDS [1, 2] or other competitive benchmarks to understand the applicability of the proposed methodology. - The proposed methodology looks to me like a combination of many well-known techniques without thorough motivation and deep analysis. [1] Pang Wei Koh et al., WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021 [2] Shiori Sagawa et al., Extending the WILDS Benchmark for Unsupervised Adaptation. https://wilds.stanford.edu Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does ImageNet pretraining affect the performance of the proposed method? What would be the performance if the method is trained from scratch? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss limitations. One possible limitation that I see is that the proposed setting decides not to distinguish between different unknown classes. It would be interesting to push the limits of semi-supervised learning and study the ultimate setting where both domain and class shift are present and the classifier is able to distinguish between different unknown classes. Some related papers include but are not limited to [1, 2]. [1] Kaidi Cao, Maria Brbic, Jure Leskovec. Open-World Semi-Supervised Learning. ICLR 2022 [2] Sagar Vaze, Kai Han, Andrea Vedaldi, Andrew Zisserman. Generalized Category Discovery. CVPR 2022 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: Though the proposed setting is indeed realistic, the experimental setup is limited. It would be helpful to consider WILDS or other competitive benchmarks to understand the applicability of the proposed methodology. A: Thanks for your kind suggestions. The experiments in our paper are conducted on the datasets that are usually used within this research community [1, 2, 3], and the results verify the effectiveness of our method. We will conduct more experiments on the mentioned benchmarks. [1] Shu et al., Open Domain Generalization with Domain-Augmented Meta-Learning, CVPR'21. [2] Zhou et al., Domain Generalization with MixStyle, ICLR'21. [3] Zhang et al., Towards Unsupervised Domain Generalization, CVPR'22. W2: The proposed methodology looks to me like a combination of many well-known techniques without thorough motivation and deep analysis. A: Semi-supervised domain generalization aims to use unlabeled data to improve the generalization of the model on *unseen* domains. Unfortunately, in a more realistic scenario, the unlabeled data usually contain *unknown classes* which may significantly degrade the generalization of the model, and the existing methods are not applicable to this scenario. The main contribution of our paper is that we propose a feasible solution with widely used basic components, e.g., one-vs-rest classifiers [4], class-wise adaptive thresholds [5] and consistency regularization [6], for this realistic scenario. The experiments conducted on realistic datasets verify the effectiveness and superiority of our method. [4] Rifkin et al., In Defense of One-vs-all Classification, JMLR'04. [5] Gui et al., Towards Understanding Deep Learning from Noisy Labels with Small-loss Criterion, IJCAI'21. [6] Sajjadi et al., Regularization with Stochastic Transformations and Perturbations for Deep Semi-supervised Learning, NIPS'16. Q2: How does ImageNet pretraining affect the performance of the proposed method?
What would be the performance if the method is trained from scratch? A: Thanks for your kind suggestions. In order to verify the effectiveness of our method, we train the models of both the compared methods and ours from scratch. The results on OfficeHome with 25:20:20 (*known classes*, *seen unknown classes* and *unseen unknown classes*) are summarized in Table 2. It can be found that our method outperforms the other compared methods.

Table 2: Leave-one-domain-out average results of *known classes* accuracy and *unknown classes* AUROC on OfficeHome.

| | Accuracy | AUROC |
|--------------|:-----------------:|:-----------------:|
| DeepAll | 21.75 | 54.07 |
| UDG | 19.91 | 55.95 |
| DAML | 29.35 | 55.55 |
| FixMatch | 40.68 | 57.25 |
| OpenMatch | 28.76 | 56.22 |
| StyleMatch | 42.57 | 56.72 |
| CWAEE (ours) | **43.66** | **61.29** |

--- Rebuttal Comment 1.1: Comment: I thank the authors for their response. My concerns were partially addressed. I have updated my score to weak accept. Still, please do provide additional benchmarks on more challenging datasets for the future revision. Providing the new setting requires a deeper experimental study of the proposed methodology and its failure modes, and will further strengthen the work. --- Reply to Comment 1.1.1: Comment: Thanks for your comments and kind suggestions. We will conduct more experimental studies on the proposed method in the new setting with additional datasets in the future revision.
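The "consistency regularization based on the Fourier Transformation" discussed in this thread is commonly instantiated by perturbing an image's amplitude spectrum while keeping its phase (FDA-style amplitude mixing). The following is a hypothetical numpy sketch of that augmentation, not necessarily the paper's exact variant:

```python
import numpy as np

def fourier_amplitude_mix(img_a, img_b, lam=0.5):
    """Return img_a with its Fourier amplitude partially replaced by
    img_b's amplitude, keeping img_a's phase.

    A consistency loss can then require similar predictions for img_a
    and the augmented image, since phase (structure) is preserved."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    # Interpolate amplitudes; lam=0 recovers img_a exactly.
    amp_mix = (1.0 - lam) * np.abs(fa) + lam * np.abs(fb)
    mixed = amp_mix * np.exp(1j * np.angle(fa))
    return np.real(np.fft.ifft2(mixed))
```

Since phase carries most of the semantic structure of an image, amplitude mixing yields a style-like perturbation under which predictions should stay stable.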
Summary: The paper considers the realistic semi-supervised domain generalization setting where known classes are mixed with some unknown classes in the unlabeled training and testing data, and proposes the Class-Wise Adaptive Exploration and Exploitation (CWAEE) method. The experiments conducted on the datasets show its advantages over previous baselines. Strengths: 1. Most previous semi-supervised domain generalization methods assumed that there is no unknown class in the unlabeled training data and testing data. The paper relaxes this assumption and considers a more challenging setting. The setting is new and important. 2. The paper proposes a two-step method. It first uses one-vs-rest classifiers and class-wise thresholds to detect the known and unknown classes, and then uses the Fourier Transform and data augmentation to improve generalization on the target domain and unknown classes. 3. The method performs well on the datasets and the ablation study is sufficient to support the results. Weaknesses: 1. Fourier-Transform-based data augmentation may not work for non-image tasks. 2. Some techniques are similar to the well-known method FixMatch. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My concerns for this paper are as follows: 1. Why does the method use a two-component beta mixture model to calculate the thresholds? 2. Why do the known classes have higher scores than the unknown classes? I think the authors should discuss this in more detail. 3. Will the number of unknown classes influence the performance? For example, the number of unknown classes could be far larger than that of known classes in the unlabeled training data. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: no negative societal impacts Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: Why does the method use a two-component beta mixture model to calculate the thresholds? Q2: Why do the known classes have higher scores than the unknown classes? I think the authors should discuss this in more detail. Q3: Will the number of unknown classes influence the performance? For example, the number of unknown classes could be far larger than that of known classes in the unlabeled training data. A: We train a one-vs-rest classifier for each *known class*, and the output scores of each one-vs-rest classifier indicate whether a sample belongs to the class or not. The samples of *unknown classes* do not belong to *known classes*, so the scores of *unknown classes* are usually lower than those of *known classes* on the corresponding one-vs-rest classifier. The score distributions of *known classes* and *unknown classes* are usually different, as shown in Figure 4 in our paper. Thus, we use a two-component beta mixture model to fit the score distributions and set the threshold of each one-vs-rest classifier using the means of the two beta components. Since we train a one-vs-rest classifier for each *known class*, the number of *unknown classes* in the unlabeled data will not influence the performance of our method on *known classes*, though the performance on *unknown classes* may be slightly influenced. We conduct more experiments on OfficeHome to verify the effectiveness of our method. We split the original label set into 5:25:20 and 5:50:10 (*known classes*, *seen unknown classes* and *unseen unknown classes*) to make the number of *unknown classes* far larger than that of known classes, and the results are summarized in Table 1. It can be found that the number of *unknown classes* does not influence the *known classes* accuracy of our method significantly, though the *unknown classes* AUROC drops slightly.
Table 1: Leave-one-domain-out average results of *known classes* accuracy (left of the slash) and *unknown classes* AUROC (right of the slash) on OfficeHome.

| | 5:25:20 | 5:50:10 |
|--------------|:-----------------:|:-----------------:|
| DeepAll | 86.72 / 81.39 | 86.72 / 81.39 |
| UDG | 80.70 / 70.77 | 82.13 / 73.65 |
| DAML | 60.14 / 60.24 | 62.13 / 61.16 |
| FixMatch | 78.59 / 69.84 | 74.69 / 59.51 |
| OpenMatch | 88.20 / 82.31 | 86.88 / 82.37 |
| StyleMatch | 82.55 / 63.59 | 77.36 / 54.67 |
| CWAEE (ours) | **89.40 / 86.69** | **88.44 / 83.80** |
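The threshold rule described in this rebuttal (fit a two-component beta mixture to each classifier's scores and take the component means) can be sketched as below. This is an illustrative EM fit with method-of-moments M-steps, not the authors' implementation:

```python
import numpy as np
from math import lgamma

def beta_pdf(x, a, b):
    # Beta density computed via log-gamma for numerical stability.
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(logc + (a - 1) * np.log(x) + (b - 1) * np.log(1 - x))

def fit_beta_mixture(scores, iters=50):
    """Fit a two-component beta mixture by EM (method-of-moments M-step)
    and return the sorted means of the two components.

    In the rebuttal's rule, the lower mean tracks unknown-class scores
    and the higher mean tracks known-class scores on each
    one-vs-rest classifier."""
    x = np.clip(scores, 1e-4, 1 - 1e-4)
    params = [(2.0, 5.0), (5.0, 2.0)]   # init: one low, one high component
    weights = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each score.
        dens = np.stack([w * beta_pdf(x, a, b)
                         for w, (a, b) in zip(weights, params)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted moment matching for each beta component.
        params = []
        for r in resp:
            m = np.average(x, weights=r)
            v = np.average((x - m) ** 2, weights=r)
            common = m * (1 - m) / v - 1
            params.append((max(m * common, 1e-2),
                           max((1 - m) * common, 1e-2)))
        weights = resp.mean(axis=1)
    return sorted(a / (a + b) for a, b in params)
```

On a bimodal score distribution (low scores for unknown classes, high for known ones), the two fitted means straddle the gap between the modes and can serve as class-wise adaptive thresholds.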
NeurIPS_2023_submissions_huggingface
2023
Learning non-Markovian Decision-Making from State-only Sequences
Accept (poster)
Summary: This paper proposes an offline model-based method for learning a non-Markovian decision process (nMDP), i.e., the transition is Markovian but the policy depends on past histories. In the dataset, we can only observe the sequence of states. This paper proposes the LanMDP algorithm, which estimates the transition and policy through MLE. Specifically, LanMDP leverages Langevin dynamics to calculate the weights in the posterior distribution. The algorithm then transforms behavior cloning into a reward-maximization problem by constructing a reward from the estimated transition/policy to do planning for sequential decisions. Strengths: The paper aims to tackle the issue of non-Markovian imitation learning where we can only observe a dataset of sequential states (with no actions), which is known to be very hard due to long history dependency (generally NP-hard, see [1]). [1] Learning in Observable POMDPs, without Computationally Intractable Oracles, Noah Golowich, Ankur Moitra, Dhruv Rohatgi Weaknesses: In my humble opinion, I think the authors could illustrate more examples (either real-world or synthetic problems) that satisfy the setting they propose. POMDPs are one important example that is confounded/has a non-Markovian structure, but their transitions do not have a Markovian structure. Intuitively, the history dependency of the transition would cause difficulty in planning/learning in non-Markovian problems ([1]), and I am not sure LanMDP could tackle this. [1] Learning in Observable POMDPs, without Computationally Intractable Oracles, Noah Golowich, Ankur Moitra, Dhruv Rohatgi Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Line 226: The additional 'including' could be a typo Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: See Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback! > Q: Could LanMDP tackle POMDPs? Indeed, with the Markovian assumption on the transition, the proposed model cannot directly extend to POMDPs, where the transition is non-Markovian. However, it would be worth trying the more general modeling of a latent EBM with non-Markovian transition and policy.
Summary: The paper considers the problem of imitation learning from state observations in a setting where the transition dynamics are Markovian, but the (unknown) reward function and, therefore, the optimal policy are not. The problem is framed as maximizing the likelihood of the expert's state marginal based on a generative process that is parameterized via a history-based EBM policy and a nonlinear Gaussian transition model (hence, the method is model-based). To optimize the likelihood of the state-only trajectory with respect to these action-related models, the method exploits the fact that the gradient of the state likelihood is equivalent to the expected gradient of the state-action likelihood, $\nabla\_{\theta}\log p\_{\theta}(\tau\_{S}) = \nabla\_{\theta} E\_{p\_{\theta}(A|S)}[\log p\_{\theta}(\tau\_{S,A})]$, where $p(A|S)$ is the posterior distribution (which takes into account that $s\_{t+1}$ contains information about $a\_{t}$). As the optimization involves expectations with respect to intractable models (not only the posterior, but also the prior; that is, the EBM policy is intractable), the proposed method uses a combination of Langevin MCMC and importance sampling. The method is tested on a toy problem, where a 2D point mass should be velocity-controlled towards a goal location while following a 3rd-order polynomial path (which corresponds to a non-Markovian reward). When providing a sufficiently large history to the policy, it can indeed learn to exhibit the desired behavior. In the main experiments on (Markovian) MuJoCo locomotion tasks, the method is compared to GAIL and behavioral cloning (BC), which both get ground-truth access to the expert actions, and their counterparts GAIfO and behavioral cloning from observations (BCO), which only use state observations.
In these experiments, the proposed method outperforms the competing imitation-learning-from-observation methods and matches or even outperforms the baselines that have privileged information about expert actions. Strengths: - The proposed method is novel and the formulation as an MLE problem is interesting. - The problem setting is very relevant. Imitation learning from observations is an important topic because oftentimes we cannot observe the expert's actions. Also, learning from non-Markovian policies is important, because for general tasks we cannot assume that the provided state-action space is Markovian. - The claims seem correct. I checked the derivations and could not spot any issues. - The paper is well-written and clear. Weaknesses: Technical Soundness -------------------------- In the derivation that I sketched in the summary, the actions would not be grounded. The maximum likelihood objective would be an offline objective (similar to behavioral cloning), where the "actions" are latent variables with no particular meaning. Hence, for computing the gradient with respect to the transition model, the paper mixes in samples from policy rollouts. While grounding the latent variables in such a manner is intuitively reasonable and, based on the experiments, sufficient to learn meaningful policies, it also seems to break the derivations. As far as I can tell, the resulting method no longer maximizes the log-likelihood of the state trajectory. Experiments ---------------- - In the toy experiments, behavioral cloning fails to produce the desired cubic trajectories, even when using a history-based policy. The paper states that this might be caused by the fact that their BC implementation uses a Gaussian (and, thus, unimodal) policy, whereas the proposed method uses a much more expressive EBM.
I don't see why the task would require a unimodal policy because with a sufficiently large history (>= 4), there is at most a single action that stays on a 3rd-order polynomial path. Furthermore, it would be straightforward to use a multimodal policy such as a normalizing flow, or for an even fairer comparison an EBM (which could also be trained using Langevin MC estimates of the partition function, or using more advanced techniques). Of course, the comparison to BC is not fair anyway, since it does not make use of online samples. I wonder why GAIfO was not trained in this toy setting. - I would expect the method to outperform BC and BCO mainly due to the fact that those methods can suffer from compounding errors (BCO is online, but uses the interactions only for labeling expert actions using a learned inverse dynamics model). GAIL and GAIfO are online methods, but they are not very sample efficient as they are not model-based, and so it's also not surprising that they underperform in the low-data regime. Model-based imitation learning from observations alone (MobILE) would seem to be a more appropriate baseline, and as mentioned in the supplementary the modified Hopper and Walker environments were even taken from this work. Furthermore, least-squares inverse Q-learning (LS-IQ) (Al-Hafez et al., 2022) was tested in the imitation-learning-from-observation setting using an inverse dynamics model, where it also clearly outperformed GAIfO. - The method relies on MCMC during training, but also during inference. The submission does not evaluate the computational overhead resulting from this design choice. - Table 1 seems to be based on a single seed. References --------------- Al-Hafez, F., Tateo, D., Arenz, O., Zhao, G., & Peters, J. (2022). LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning. In The Eleventh International Conference on Learning Representations.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It is not clear how the goal position is specified for the toy experiment. Is it added to the state space? - Line 77 argues that non-Markovian rewards and Markovian states are usually sufficient because we can always define a state space that makes the dynamics Markovian. Couldn't we just as well argue that we can always define a state space that makes the reward function Markovian? - What is the computational overhead caused by MCMC? - What is the motivation for using an EBM instead of a more tractable policy such as a normalizing flow? - Line 192 states that Gaussian expectations can be approximated by the mean and refers to Kingma & Welling. Which statement exactly are you referring to? - Is there any theoretical justification for mixing in online samples in the maximum likelihood objective? - Line 222: What is y'? Is it the target y location? Should it then also be $x' = 1$ instead of $x = 1$? - What is the main insight of Section 4? That there exists a reward function such that we can frame the MLE problem as a MaxEnt-RL problem? What are the implications? - Why is it overimitation if the agent imitates the cubic path of the expert? In the cited psychology experiments there was a mechanism and an extrinsic reward, and therefore overimitation can be defined as imitating features that are not relevant for obtaining the extrinsic reward. How is overimitation defined in a setting without extrinsic reward? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: While the paper mentions the limitation that MCMC may become prohibitively expensive, the paper does not provide evaluations of the computational cost. Also, MCMC can suffer from high variance and bad mixing times.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback! > Learning objective The mixing of gradients in Eq. 8 is derived from jointly optimizing two log-likelihoods, Eq. 5 and $L_{online}(\beta) = \sum_{i=1}^{m} \sum_{t=1}^{T} \log p_\beta (s_{t+1}^i | s_{t}^i, a_{t}^i)$. We will include this explanation in the revision. > Confusion about BC's result with sufficient history in the toy experiment “Behavioral cloning fails to produce the desired cubic trajectories, even when using a history-based policy” is a misunderstanding. Thanks to this comment, we realize that putting the result of behavior cloning with sufficient history in the supplementary and leaving only the one with insufficient history in the main text could be confusing. Please kindly refer to the two subfigures at the bottom right corner of Fig. 1 in the supplementary for the result aligning with your prediction. Since there seems to be a typo in “I don't see why the task would require a unimodal policy”, we reckon that the reviewer may also be wondering about the gap between LanMDP context 4 and BC context 4 in Fig. 2f. We hypothesize that this might be attributed to the larger compounding error in BC. To isolate the influence of compounding error, we design an experiment where we measure the residual error of the next state after filling the historical contexts of the learned LanMDP context 4 and BC context 4 with expert states, rather than sampled states. Interestingly, the error is around 0.0004 for both LanMDP and BC, closing the gap in Fig. 2f. The implication seems to be that LanMDP is more robust to compounding error than BC. > In the toy task, how about baseline models other than BC? 1. Fair comparison with BC: As stated in L230-232, “in a deterministic environment, there should not be a difference between BC and BCO, as the latter basically employs inverse dynamics to recover action labels. For our model, this simple transition can either be learned or implanted.
Empirically, we don’t notice a significant difference.” 2. Other multimodal latent policies such as normalizing flow models: We agree that it would be interesting to explore these models in future work. Our major consideration in choosing the latent EBM is that it makes the fewest statistical assumptions (though the computational assumption of MCMC may be strong), such that the exposition of the decision-making process (Sec. 2) is clean to people from both the generative modeling and decision-making literatures. 3. Comparison with an EBM with action labels: In a deterministic environment with such a simple Markovian transition, sampling the posterior $p(a_t | s_{0: t+1})$ can be done by the inverse dynamics $g(a_t | s_t, s_{t+1})$, in which $a_t$ can be uniquely identified. So there is actually no difference between a latent EBM with ground-truth transition and an EBM with action labels. 4. Comparison with GAIfO: Unlike BC, which can be extended to the non-Markovian setting, GAIfO, as a state-matching method, is limited to Markovian tasks. > Model-based baselines We have been experimenting with the public code of MobILE on our demonstration sets since we were preparing this submission but haven’t obtained results comparable to those reported in their paper. Unfortunately, the messages we sent to the authors have all gone unanswered. Thanks for pointing us to the LS-IQ paper. Since the major contribution of our work is not model-based Markovian imitation, we didn’t exhaust every paper in the Markovian domain and thus were not aware of this one, but we are definitely awed by its performance against the expert. We will include the training curves of LS-IQ in the revision. > MCMC cost We add experiments to measure the time of posterior sampling, the time of prior MCMC sampling, and the pure training time. As shown in Table 2, posterior sampling is more time-consuming than prior sampling due to the additional computation of the gradient of the transition model.
Importance sampling can bypass the additional cost since it involves only prior sampling. > Table 1 The results are actually based on 5 seeds. We have updated the table to include the standard deviation in the attached pdf file. > Goal position? Overimitation and extrinsic reward? It is more or less a philosophical question how humans understand the concept of “goal” through imitation. In L191, we stated that the “goal” means the last state $s_T$ in the sequence. The reviewer seems to have a different idea, since in the cited psychology experiments, the object to be picked up at the end of demonstration sequences seems to be an extrinsic reward that further hints at the “goal”. We hope the reviewer would like to accept our hypothesis that agents may regard the last state as the "goal", since it is simpler than the suggested one and hence preferable by the principle of Occam’s razor. Under this hypothesis, it is clear that overimitation is visiting the demonstrated goal along with some causally unnecessary states from the demonstrations. > Non-Markovian rewards and Markovian states are usually sufficient? We hope to remind the reviewer of the distinction between sufficiency and necessity. Indeed, a non-Markovian decision-making problem is not necessarily represented by non-Markovian rewards and Markovian transitions. Our motivation for this design choice is communicated in L82-84. > Insight of Sec. 4 As the reviewer points out, Sec. 4 proves there exists a reward function such that we can frame the MLE problem as a non-Markovian MaxEnt-RL problem, and this problem is solvable if MaxEnt-RL can be generalized to non-Markovian domains. This may further suggest that decision theory, especially offline MaxEnt-RL, might be a consequence of emergence from generative sequence modeling, rather than the essence.
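For completeness, the gradient identity underlying the state-only MLE objective discussed in this thread is Fisher's identity, which follows from $\nabla_\theta p = p \, \nabla_\theta \log p$ and $p_\theta(A \mid S) = p_\theta(\tau_{S,A}) / p_\theta(\tau_S)$:

```latex
\nabla_\theta \log p_\theta(\tau_S)
  = \frac{\nabla_\theta \int p_\theta(\tau_{S,A}) \, dA}{p_\theta(\tau_S)}
  = \int \frac{p_\theta(\tau_{S,A})}{p_\theta(\tau_S)}
        \, \nabla_\theta \log p_\theta(\tau_{S,A}) \, dA
  = \mathbb{E}_{p_\theta(A \mid S)}\big[ \nabla_\theta \log p_\theta(\tau_{S,A}) \big].
```

The expectation over the intractable posterior $p_\theta(A \mid S)$ is what the method approximates with Langevin MCMC or importance sampling.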
Summary: The paper proposes a generative model for learning non-Markovian decision-making from state-only sequences, where the policy is an energy-based prior in the latent space of the state transition generator. To solve the problem, the authors develop a maximum likelihood estimation method (LanMDP) for model-based imitation learning, which involves MCMC and importance sampling techniques. Besides, the paper shows how the learned model can be used for decision-making as inference, where model-free policy execution and model-based planning are equivalent to prior and posterior sampling, respectively. In experiments on a path planning task with non-Markovian constraints and several domains from the MuJoCo suite, LanMDP achieves comparable or superior performance to the baselines (BC, LfO and GAIfO). Strengths: State-only imitation is an important and widely studied topic. This paper proposes an interesting idea of model-based imitation, and combines MCMC and importance sampling to estimate the model parameters from state-only data. The paper demonstrates the versatility of the learned model for both model-free and model-based decision-making, where inference is performed by sampling from the prior or the posterior. The empirical evidence also supports the effectiveness of LanMDP. Weaknesses: I am not familiar with the domain of nMDPs, and I would like to share my concerns about the method and experiments: My major concern is that the proposed method relies on MCMC and importance sampling techniques to estimate the model parameters and perform inference. However, these techniques can be computationally expensive, sample-inefficient, and sensitive to initialization and tuning. Therefore, it is unclear how well the proposed method scales to more realistic and challenging scenarios. Besides, the proposed method assumes that state-only sequences are sufficient to infer and imitate non-Markovian decision-making behaviors.
In some cases, state-only sequences may not be enough to capture the relevant information or causal factors that influence decision-making, such as intentions, preferences, beliefs, or emotions. In such cases, state-only sequences may not be enough to learn meaningful or generalizable policies. The motivation is unclear. For example, what is the advantage of using an nMDP? Why does model-based imitation outperform conventional methods? Additionally, I feel that there is a lack of cohesion between sections. For example, Section 3 is titled 'Learning and Sampling'. However, it is separate from the previous section, and on a first read I did not know what is being learned and what is being sampled. For the experiments, the baseline methods are all from the 'imitation from observation' domain, which are built on the standard MDP. I encourage the authors to make comparisons with a more closely related domain, i.e., methods that learn non-Markovian decision-making from state-only sequences. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Here are some questions for the authors: 1. How do you evaluate the quality and diversity of the samples generated by the model? Do you use any quantitative metrics or qualitative visualizations to assess the fidelity and diversity of the generated sequences? 2. How do you compare the proposed method with other methods that learn non-Markovian decision-making from state-only sequences, such as inverse reinforcement learning or variational inference methods? What are the advantages and disadvantages of each approach? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The major limitations I am concerned about are presented in the 'Weaknesses' part.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback! > Computational cost of MCMC. We recognize this concern, but we are rather optimistic about short-run MCMC methods [1], since in realistic tasks the dimensionality of the action space is normally small compared with that of the state space. We added experiments to measure the time of posterior sampling, the time of prior MCMC sampling, and pure training time. As shown in Table 2, posterior sampling is more time-consuming than prior sampling due to the additional computation of the gradient of the transition model. Importance sampling can bypass the additional cost since it involves only prior sampling. > Assumes that state-only sequences are sufficient to infer and imitate non-Markovian decision-making behaviors? We feel that this is a misunderstanding, and we hope the reviewer can clarify what "sufficient" means. If it means state-only sequences are insufficient to uniquely identify a latent action, we certainly agree. In fact, existing works attribute the unsatisfying performance of inverse dynamics methods to exactly this insufficiency [2]. We take this into consideration in the model design by making the policy multi-modal. In terms of "intentions, ..., emotions", we would argue that people do not, for example, carry a belief label when behaving, yet we can infer beliefs from observations. These are therefore all latent variables, just like the latent actions in our model. > The motivation is unclear. The motivation is communicated in the introduction. Specifically, we want to draw the reviewer's attention to L28-29, where we point out the limitation of conventional imitation methods due to their reliance on the Markovian assumption and TD learning. The motivation is further illustrated with a toy task in Sec. 5.1. We also invite the reviewer to discuss the relevance of nMDPs with reviewers W5w2 and wvit. We believe "model-based imitation outperforms conventional methods" is a misconception of our results.
For example, BCO in Fig. 3 is also a model-based method. Our use of a model-based method is largely due to the restrictions of latent actions and nMDPs, where model-free methods may not directly apply. We reckon that this misconception may be attributable to the statement "We develop maximum likelihood estimation to achieve model-based imitation" in the abstract (L8), as it implies a certain sense of purpose toward model-based imitation, despite our real intention of conveying that all constituents of the nMDP (L6) are learned. We will resolve this confusion in the revision. > A lack of cohesion between sections. Thanks for this feedback! We believed the structure of this paper was implied in the last two paragraphs (L36-62) of the introduction, and that the "modeling, learning, analysis" format is conventional in machine learning papers. We will add a paragraph at the end of the introduction to explicitly sketch the structure in the revision. In short, in Sec. 2 we introduce the model; in Sec. 3 we introduce learning this model with neural networks as function approximators, which involves sampling; in Sec. 4 we analyze the connections between the learned model and sequential decision-making problems. > Comparisons with methods that learn non-Markovian decision-making from state-only sequences. To the best of our knowledge, there is no established research in this particular setting. > Quantitative metrics or qualitative visualizations for the fidelity and diversity of the generated sequences? The quality of generated sequences in the cubic planning task is evaluated by the residual errors and acceptance rate detailed in L233-239. In short, we evaluate whether the model learns to generate cubic sequences. This can also be assessed qualitatively from the plots in Fig. 2. For the MuJoCo tasks, quality is assessed by the final score of the test trajectory. In response to the reviewer's request, we have uploaded some videos for qualitative evaluation.
Thanks for drawing our attention to diversity, for which we do not currently have a systematic metric. A published benchmark for unsupervised RL [3] also omits such a measure, and other sequence models for decision-making lack such an evaluation as well. We agree this is an important concern and will include this limitation in the revision. > Comparison with IRL or variational inference methods? As noted in our previous answers, there seems to be no established work on non-Markovian IRL yet, due to IRL's reliance on the Markovian assumption. We pick behavior cloning (BC) as the major baseline because it is free from the Markovian assumption. Putting Markovianness aside, we believe that with abundant data, behavior cloning with action labels empirically upper-bounds the performance of IRL. This is supported by the results from [4,5], as well as our results in Fig. 3. We very much anticipate future progress from RL's perspective. We are glad the reviewer mentioned variational inference as an alternative to MCMC. It is actually the next project in our plan, as we believe learning an amortized sampler for the latent EBM in nMDPs deserves a standalone paper. [1] Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. "Learning non-convergent non-persistent short-run MCMC toward energy-based model." Advances in Neural Information Processing Systems, 32, 2019. [2] Zhu, Zhuangdi, et al. "Off-policy imitation learning from observations." Advances in Neural Information Processing Systems 33 (2020): 12402-12413. [3] Laskin, Michael, et al. "URLB: Unsupervised reinforcement learning benchmark." Neural Information Processing Systems Track on Datasets and Benchmarks (2021). [4] Goo, Wonjoon, and Scott Niekum. "Know Your Boundaries: The Advantage of Explicit Behavior Cloning in Offline RL." (2022). [5] Florence, Pete, et al. "Implicit behavioral cloning." Conference on Robot Learning. PMLR, 2022.
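As a loose illustration of the short-run MCMC referenced in the rebuttal above [1], a few steps of Langevin dynamics can draw approximate samples from an energy-based prior over a latent action. The following is a minimal sketch under our own assumptions (a hypothetical quadratic energy standing in for the learned policy energy), not the authors' implementation:

```python
import numpy as np

def short_run_langevin(grad_energy, a_init, n_steps=30, step_size=0.1, rng=None):
    """Short-run Langevin dynamics toward p(a) ~ exp(-E(a)):
    a <- a - (s^2 / 2) * grad E(a) + s * xi, xi ~ N(0, I), for a small fixed n_steps."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.array(a_init, dtype=float)
    for _ in range(n_steps):
        a = a - 0.5 * step_size**2 * grad_energy(a) + step_size * rng.normal(size=a.shape)
    return a

# Hypothetical quadratic energy E(a) = ||a||^2 / 2, so grad E(a) = a
# and the target distribution is a standard normal.
samples = np.stack([
    short_run_langevin(lambda a: a, np.zeros(2), rng=np.random.default_rng(seed))
    for seed in range(500)
])
```

Each prior-sampling step costs one gradient of the policy energy; posterior sampling would additionally require gradients through the transition model, which is consistent with the timing comparison reported in the rebuttal.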
--- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. I have carefully read the response and authors' discussion with other reviewers (especially with Reviewer W5w2). I think the response addresses my major concerns about computational cost and the motivation of the paper. I feel more positively about the paper and would like to raise my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you very much!
Summary: The goal of this approach is to learn from state-only sequences, where action labels are not present, especially in non-Markovian settings. The method formulates a non-Markovian Decision Process (nMDP) with latent actions as a latent-space energy-based model, showing that inference on the model is equivalent to policy execution/planning. Using a dynamics model and posterior sampling, the learned model can be used for planning, even for reaching new goal states. The paper also draws a connection between the proposed approach and max-ent RL. The results show good performance for non-Markovian (and even some Markovian) tasks. Strengths: - This paper addresses an interesting problem of learning in non-Markovian settings - The application to learning from action-free data is very interesting as well - The approach outperforms IRL and BCO baselines on toy and control tasks - The approach is sound and novel (to my knowledge) Weaknesses: I think there could be more analysis on the different types of non-Markovian tasks (i.e. tasks that require more long-term versus short-term memory). In general, more robotics-focused applications would be a better showcase for this method. The context seems to be important - it would be good to investigate how the use of sequence-based architectures such as the Decision Transformer (Chen et al., 2021) fits into this general problem as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses section Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: These are sufficiently addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the feedback! > I think there could be more analysis on the different types of non-Markovian tasks (i.e. tasks that require more long-term versus short-term memory). In general, more robotics-focused applications would be a better showcase for this method. The context seems to be important - it would be good to investigate how the use of sequence-based architectures such as the Decision Transformer (Chen et al., 2021) fits into this general problem as well. We agree that systematically categorizing long-/short-term memory in nMDPs and testing various sequential decision-making architectures [1,2] in different regimes are crucial research topics. However, to the best of our knowledge, such research awaits more infrastructure due to the lack of systematic benchmarks. We hope our work can invoke more efforts in the emerging area of non-Markovian decision-making, which would leave us better prepared for the suggested investigation of the influence of different sequence-based architectures. We would also love to explore in future work more robotics-focused applications beyond the toy and simulated control tasks presented here. [1] Janner, Michael, Qiyang Li, and Sergey Levine. "Offline reinforcement learning as one big sequence modeling problem." Advances in Neural Information Processing Systems 34 (2021): 1273-1286. [2] Ajay, Anurag, et al. "Is conditional generative modeling all you need for decision-making?." arXiv preprint arXiv:2211.15657 (2022).
Rebuttal 1: Rebuttal: We would like to thank all reviewers for the helpful feedback! We add two tables with additional experiments in response to your requests. Table 1 includes a new baseline for the MuJoCo task, as well as the previously omitted std of the proposed model. The proposed model still exhibits comparable performance to the baselines. Table 2 measures the time for prior and posterior sampling; it supports that importance sampling indeed improves efficiency. Pdf: /pdf/a91b33253e1eaf2211506900bdda35827f2c0fad.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a new approach to imitation learning from observation (ILfO) on a non-Markovian decision process (nMDP). To achieve this, the authors derive their own objective using maximum likelihood estimation, drawing inspiration from deep generative modeling of state-only sequences. After establishing a connection between probabilistic inference and decision-making, the authors conduct experiments on cubic curve planning and MuJoCo control tasks. Strengths: While there are several works addressing ILfO for MDPs, this work appears to be the first attempt to handle ILfO for nMDPs. To this end, the authors derive their own objective based on probabilistic inference. Weaknesses: **[Hard to understand]** This paper is hard to understand due to the lack of explanation. For example, there are various policy-like terms such as $p\_\theta(A|S)$, $p\_\alpha(a\_t|s\_{0:t})$, and $p\_\theta(a\_t|s\_{0:t+1})$. I believe $p\_\alpha(a\_t|s\_{0:t})$ corresponds to the policy of the current agent in RL. However, in the context of RL, there is no explicit explanation provided for what $p\_\theta(A|S)$ and $p\_\theta(a\_t|s\_{0:t+1})$ correspond to. In addition, the final objective function is not formulated. **[Lack of baselines]** In the context of RL, there are two ways to improve sample efficiency: off-policy RL and model-based RL. Thus, at least one of the off-policy ILfO algorithms (e.g., OPOLO [1]) and one of the model-based ILfO algorithms (e.g., MobILE [2]) should be considered as baselines. **[Objective]** First of all, I wonder if the proposed objective (5) is valid in a stochastic environment. Indeed, the learned $p\_\beta$ can differ from the true transition in a stochastic environment, although it is possible to learn an accurate transition with enough online samples. Moreover, (8) may also allow for learning an accurate transition with enough online samples, but it is modified from the original objective (5) without any theoretical justification.
**[Exploration]** I think LanMDP might fail to obtain a good policy in a stochastic environment (i.e., with stochastic transition probabilities) because there is no exploration. For example, if the algorithm obtains $(s,a,s')$ during interaction, then there is no need to find other possible actions $a'$ that might be better for reaching $s'$ from $s$. **[Motivation of task]** This work aims to solve ILfO for nMDPs. However, there are no motivating examples explaining why ILfO for nMDPs should be studied. Specifically, even though the reward function is non-Markovian, the expert policy is not necessarily non-Markovian. Moreover, even when the expert policy is non-Markovian, it is not clear whether existing ILfO algorithms fail to solve ILfO for nMDPs. **[Motivation of LanMDP]** It is not clear why we should use decision-making as inference instead of a simple modification of prior works. Indeed, we can also handle ILfO on an nMDP by modifying MobILE - for example, if we replace $(s,a)\sim\mathbb{E}\_{d^\pi}$ and $s\sim D\_e$ with $\tau\sim\mathbb{E}\_{\pi}$ and $\tau\_s\sim D\_e$ for a non-Markovian policy $\pi$ and a state-only trajectory $\tau\_s$, then it can be used to solve ILfO on an nMDP. [1] Zhu, Zhuangdi, et al. "Off-policy imitation learning from observations." Advances in Neural Information Processing Systems 33 (2020): 12402-12413. [2] Kidambi, Rahul, Jonathan Chang, and Wen Sun. "MobILE: Model-based imitation learning from observation alone." Advances in Neural Information Processing Systems 34 (2021): 28598-28611. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Mentioned in the weaknesses + following questions - (related to [Lack of baselines]) In experiments, I believe that at least one of the off-policy ILfO algorithms (e.g., OPOLO) and one of the model-based ILfO algorithms (e.g., MobILE) should be added as baselines. - (related to [Exploration]) Compared to MobILE, why does LanMDP not need to consider an exploration term?
In addition, is there any risk in a stochastic environment without exploration, such as the one mentioned in the weaknesses part? - What is the final objective of LanMDP, which is used for gradient-based updates? Additionally, compared to the baselines (OPOLO and MobILE), what is the main contribution of LanMDP? - I think $R: S^+\rightarrow \mathbb{R}$ is a generalization of $R(s)$. Is there any reason not to use $R: S^+\times A^+\rightarrow \mathbb{R}$, which is a natural generalization of $R(s,a)$? - In equation (13), we need to compute $c(a\_t;s\_{0:t+1})$, which involves an expectation over $p\_\alpha$. I believe this part involves intractable integration in the continuous domain. If the authors use MCMC to address this issue, I am curious about the computational time it takes (compared to other baselines). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback! > [Hard to understand] this paper is hard to understand due to the lack of explanation. For example, there are various policy-like terms as $p_\theta(A|S)$, $p_\alpha(a_t|s_{0:t})$, and $p_\theta(a_t|s_{0:t+1})$. I believe $p_\alpha(a_t|s_{0:t})$ corresponds to the policy of the current agent in RL. However, in the context of RL, there is no explicit explanation provided for what $p_\theta(A|S)$ and $p_\theta(a_t|s_{0:t+1})$ correspond to. We believe we have provided sufficient explanation for every notation we use. For example, in L110-111, we explain that $p_\theta(A|S)$ denotes $p_\theta(a_{0:T-1}|s_{0:T})$, a posterior probability in which $A$ denotes the complete action sequence $a_{0:T-1}$, $S$ denotes the complete state sequence $s_{0:T}$. We understand that this posterior probability isn’t usually seen in the literature of RL. So we explicitly define $p_\theta(a_{0:T-1}|s_{0:T})$ in Eq. 4. The notation of $p_\theta(a_t|s_{0:t+1})$ also follows the conventions of conditional distribution and Bayesian statistics. > [Lack of baselines] In the context of RL, there are two ways to improve sample efficiency: off-policy RL and model-based RL. Thus, at least one of the off-policy ILfO algorithms (e.g., OPOLO [1]) and one of the model-based ILfO algorithms (e.g., MobILE [2]) should be considered as baselines. We appreciate the reviewer's suggestion to include off-policy and model-based ILfO algorithms as baselines. We initially considered MobILE as a baseline since it is also a model-based method for imitation from observation. However, our experiments with MobILE, using various sets of expert demonstration trajectories, did not yield satisfactory results. We will include this observation in the revision. We have included the result of OPOLO in Table 1 in the attached PDF file. The proposed LanMDP shows comparable performance. 
> [Objective] First of all, I wonder if the proposed objective (5) is valid in a stochastic environment. Indeed, the learned $p_\beta$ can differ from the true transition in a stochastic environment, although it is possible to learn an accurate transition with enough online samples. Moreover, (8) may also allow for learning an accurate transition with enough online samples, but it is modified from the original objective (5) without any theoretical justification. The learning paradigm we employ is maximum likelihood. Eq. 5 is the general maximum likelihood objective, which is supposed to be applicable to any distribution to be learned, no matter whether the environment is deterministic or stochastic. The potential issue of using Eq. 5 alone is that the learned transition may not be grounded because actions are latent. To overcome this issue, we introduce the extra online learning term in Eq. 8, making it deviate from Eq. 5. Specifically, it is derived from jointly optimizing two log-likelihoods, Eq. 5 and $L_{online}(\beta) = \sum_{i=1}^{m} \sum_{t=1}^{T} \log p_\beta (s_{t+1}^i | s_t^i, a_t^i)$. We will include this explanation in the revision. > [Exploration] I think LanMDP might fail to obtain a good policy in a stochastic environment (i.e., with stochastic transition probabilities) because there is no exploration. For example, if the algorithm obtains $(s, a, s')$ during interaction, then there is no need to find other possible actions $a'$ that might be better for reaching $s'$ from $s$. We disagree with this comment. As stated in L182-184, LanMDP is inherently advantageous in exploration thanks to the energy-based modeling of policies. The maximum entropy inclination in energy-based policies has been shown to encourage exploration [1]. > [Motivation of task] This work aims to solve ILfO for nMDPs. However, there are no motivating examples explaining why ILfO for nMDPs should be studied.
Moreover, while the expert policy is non-Markovian, it is unclear if existing ILfO algorithms fail to solve ILfO for nMDPs. We cannot help arguing that this criticism is unsubstantiated. Actually, the cubic planning task in Sec 5.1 is deliberately included to motivate the importance of non-Markovian modeling. The empirical results demonstrate the insufficiency of the Markovian imitation learning methods (with or without action labels) and the efficacy of the proposed model. > [Motivation of LanMDP] It is not clear why we should use decision-making as inference instead of a simple modification of prior works. Indeed, we can also handle ILfO on an nMDP by modifying MobILE. As far as we know, there is no established work showing that the suggested modification can solve the problem of imitation learning in non-Markovian domains in a principled manner. We also believe the non-Markovian extension of state-matching methods is non-trivial. It would be helpful if the reviewer could provide us with some pointers. > I think $R: S^+ \rightarrow \mathbb{R}$ is a generalization of $R(s)$. Is there any reason not to use $R: S^+ \times A^+ \rightarrow \mathbb{R}$, which is a natural generalization of $R(s, a)$? In Sec 2, we introduce statistical assumptions in LanMDP. Since actions are assumed to be unobservable in the dataset, we exclude them from the dependency of $R$. It could be an interesting future direction to explore the setting that the reviewer suggested. > In equation (13), we need to compute $c(a_t; s_{0:t+1})$, which involves an expectation over $p_\alpha$. I believe this part involves intractable integration in the continuous domain. If the authors use MCMC to address this issue, I am curious about the computational time it takes (compared to other baselines). We add experiments to measure the time of posterior sampling, the time of prior MCMC sampling, and pure training time in Table 1 in the attached PDF.
Importance sampling can bypass the additional cost of posterior sampling since it involves only prior sampling. [1] Haarnoja, Tuomas, et al. "Reinforcement learning with deep energy-based policies." International conference on machine learning. PMLR, 2017. --- Rebuttal Comment 1.1: Comment: Thank you for the response. However, I still have the following questions. 1. **Notation** I want to clarify that I understand the definition of $p\_\theta(A|S)$. However, what I am trying to convey is this: In Equation (2), the authors establish the definition of $p\_\theta$ using $p\_\alpha$ and $p\_\beta$. This may correspond to the prior $\times$ likelihood in Bayesian perspective. Thus, the definition of $p\_\alpha(a\_t|s\_{0:t})$ is given in Equation (3). Given this context, my question is: What is the definition of $p\_\theta(a\_t|s\_{0:t+1})$, which may correspond to the posterior, as presented in Equation (10)? Is it really independent of $s\_{t+2:T}$ and reward terms? I have this question because the authors employ the assumption of Markovian transition to establish the final equality in Equation (10) (i.e., $p\_\theta(a\_t|s\_{0:T})=p\_\theta(a\_t|s\_{0:t+1})$). However, my understanding is that in order to derive this final equality, the **trajectory should be independent of the reward**, given that rewards do not adhere to the Markovian property. However, assuming that trajectories are sampled without observing reward is strange. The authors assume that expert demonstrations maximize this non-Markovian reward, while the learned policy aims to mimic these expert demonstrations. Consequently, the trajectory sampled from the learned policy is inherently dependent on the reward. I am not sure if this claim is accurate, but my intention is to emphasize the **need for a more detailed derivation in order to obtain Equation (10)**. If the authors want to claim that Equation (10) still holds for trajectories dependent on the reward, then it should be derived theoretically. 
2. **Baseline** ~~I cannot find the performance of OPOLO in Table 1. In my understanding, Table 1 in the paper compares BC and LanMDP with different context lengths, and Table 1 in the appendix provides the hyperparameters. Could you please provide more specific information about the location of the experimental results of OPOLO in Table 1?~~ Apologies for the confusion. I found Table 1 in the attached file. Based on these results, I believe even more strongly that an additional realistic experiment, perhaps a case that reflects ILfO in nMDP, is required to justify LanMDP. 3. **Objective** The provided explanation is not enough. What I want to convey is the theoretical justification for including $L\_\text{online}$ within $L(\theta)$ for both model and policy learning. Can we be certain that the joint optimization theoretically converges to the desired model and policy? Does this fundamentally resolve the issue of ungrounded transitions with respect to arbitrary $w\_\beta$? If not, there must be more theoretical derivations. 4. *Exploration* (Minor) I missed details in L182-184 and I appreciate your pointing that out. I am still confused about when LanMDP interacts with the environment and when the replay buffer is used. Based on my understanding, LanMDP interacts with the environment initially and during training, particularly in the 'Use energy-based policy with $\alpha\_0$ ...' part and the '**Transition learning**: Update replay buffer with trajectories from the current policy model ...' part in the Algorithm in Appendix A. Additionally, the replay buffer is utilized to compute $L\_\text{online}(\beta)$. If my understanding is accurate, then why is LanMDP more sample efficient than off-policy ILfO, even though it does not sample trajectories from the learned model? Additionally, I would like to understand how the authors utilize the ensemble, as mentioned in L183 of the Appendix. 5. 
**Motivation** I disagree that the cubic planning task serves as a motivating example. The cubic planning task actually motivates the proposed algorithm to tackle the nMDP, rather than serving as the motivation for the ILfO in the nMDP. To claim that ILfO in the nMDP is truly valuable, it must correspond to real-world problems. If this paper considers ILfO in POMDP, then I think it is valuable, but I cannot find any realistic examples that reduce to the ILfO in the nMDP. Additionally, MobILE can be applied simply to solve ILfO in the nMDP by using $\pi(a\_t|s\_{0:t})$ instead of $\pi(a\_t|s\_t)$ (and in Equation (1) of the MobILE paper, consider using $f(s_{0:t})$ with $(s_{0:t},a_{0:t})\sim D_{\hat{P}}^\pi$ and $(s_{0:t})\sim D_e$ instead of $f(s)$, if needed). Compared to this ad-hoc approach, what is the main advantage of LanMDP, which also constructs the objective (8) without theoretical justification? --- Reply to Comment 1.1.1: Comment: Thanks for helping us understand your confusion. 1. **Notation and reward independence?** It seems there is still a misunderstanding. $p_\theta(a_t|s_{0:t+1})$ is the posterior $\frac{p_\alpha(a_t|s_{0:t})p_\beta(s_{t+1}|s_t, a_t)}{p_\theta(s_{t+1}|s_{0:t})}$, in which we use subscript $\theta$ whenever both $\alpha$ and $\beta$ are involved, as defined in L95. We omitted the derivation of Eq. (10) because we believed it was apparent under conditional independence described in our probabilistic graphical model: $p_\theta(a_t|s_{0:T}) = \frac{p_\theta(a_t, s_{0:T})}{p_\theta(s_{0:T})} = \frac{p_\theta(a_t, s_{0:t})p_\beta(s_{t+1}|s_t,a_t)p_\theta(s_{t+2:T}|s_{0:t+1})}{p_\theta(s_{0:t})\int p_\alpha(a_t|s_{0:t})p_\beta(s_{t+1}|s_t,a_t)da_t p_\theta(s_{t+2:T}|s_{0:t+1})} = \frac{p_\alpha(a_t|s_{0:t})p_\beta(s_{t+1}|s_t, a_t)}{p_\theta(s_{t+1}|s_{0:t})}=p_\theta(a_t|s_{0:t+1}) $.
In the derivation above, we don’t have explicit variables for reward or return because the expert policy is assumed to have optimized them – they are marginalized variables in the policy distribution. Marginalization, however, does not mean independence. 2. **Baseline** We believe the additional experiments do not rule out our assumption that BC empirically upper bounds the performance of existing methods in Markovian domains, especially in tasks with high-dimensional actions. In non-Markovian domains, BC is still a sufficient baseline since there aren't established ILfO methods. But we agree that the suggested investigation can be an interesting future work. 3. **Objective and theoretical justification** We understand your concerns in jointly optimizing two log-likelihoods (in our case, one marginal likelihood and one conditional likelihood). We are also aware of the emerging literature on RL theory, especially model-based RL, which analyzes the convergence of jointly learning a transition model and a policy. However, we hope to remind the reviewer that intellectual efforts are needed in RL theory partially because its proof of convergence cannot be trivially reduced to maximum likelihood estimation, whose asymptotic theory has been established long ago. Formally speaking, MLE is *asymptotically consistent*, in the sense that if the data were generated by the modeled function family (which fulfills regularity conditions of identifiability, compactness, continuity, and dominance) and we have a sufficiently large number of observations, then it is possible to find the desired model parameters $\theta$ with arbitrary precision [1]. Multiplying a collection of likelihoods is called composite likelihood [2]. As long as all components satisfy the regularity conditions, the asymptotic consistency still holds. [1] Maximum likelihood estimation. (2023, August 9). In Wikipedia. [2] Varin, Cristiano, Nancy Reid, and David Firth. "An overview of composite likelihood methods." 
Statistica Sinica (2011): 5-42. 4. **Online interaction** We never claim that LanMDP is more sample-efficient than off-policy ILfO. We use an ensemble to stabilize gradients from importance sampling. Specifically, we hold two transition models $f_1, f_2$ trained from different initialization and data shuffling. During importance sampling, a small portion of action samples with significant disagreement between the predictions of $f_1$ and $f_2$ are discarded. Empirically, we find this trick very effective in stabilizing neural network training. Note that this ensemble is not used in online interaction. 5. **Motivating experiments and twisted MobILE** We believe cubic planning is an abstraction of some realistic tasks. For example, in autonomous driving, cubic Bezier curve planning, which involves 3-step lookahead, plays a crucial role in making the path smoother. Our experiments show that only when the non-Markovian context is sufficiently represented can we learn a policy that does not require any lookahead during execution. While we agree that MobILE is an awesome work, we hope our work can be evaluated as self-contained research. At the level of formulation, the framework of distribution matching in MobILE is an alternative to maximum likelihood estimation in LanMDP, in which any concrete modeling of the distribution-to-be-matched deserves a standalone study. An ad-hoc twist of this rigorous method towards non-Markovianness, even were it to lead to wide empirical success, would mainly reveal unnecessary assumptions in the twisted method, rather than invalidate other formulations. At the level of specifics, we would like to remind the reviewer that (1) MobILE implicitly assumes the policy is unimodal, see their footnotes 2 and 3; and (2) their reward-based formulation leads to the requirement of model-based policy search, which always involves either Monte Carlo sampling or TD learning.
In contrast, LanMDP (1) adopts multimodal energy-based policies, making the two footnotes less concerning; (2) bypasses model-based policy search with posterior sampling/importance sampling.
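To make the last point concrete, the following is a minimal sketch (not the authors' code) of how self-normalized importance sampling can estimate a posterior expectation over a latent action using only prior samples. The one-dimensional Gaussian prior and transition here are hypothetical stand-ins for $p_\alpha$ and $p_\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_logpdf(s_next, s, a):
    # Hypothetical Gaussian transition p_beta(s'|s, a) = N(s' ; s + a, 0.1^2),
    # standing in for a learned transition model (constants dropped).
    return -0.5 * ((s_next - (s + a)) / 0.1) ** 2

# Prior samples a_i ~ p_alpha(a|s): here a hypothetical standard normal prior.
a = rng.normal(size=10_000)

# Self-normalized importance weights w_i proportional to p_beta(s'|s, a_i)
# turn the prior samples into an estimate of the posterior mean E[a | s, s'].
s, s_next = 0.0, 0.7
logw = transition_logpdf(s_next, s, a)
w = np.exp(logw - logw.max())
w /= w.sum()
posterior_mean = float(np.sum(w * a))
```

Each estimate requires only prior samples and transition-density evaluations, with no gradient steps through the transition model, which matches the rebuttal's claim that importance sampling bypasses the extra cost of posterior MCMC.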
DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization
Accept (poster)
Summary: This paper proposes a method to improve the performance of existing meta-heuristic algorithms using deep neural networks. At the same time, the method described in this paper makes it possible to design a high-performing meta-heuristic without requiring expert domain knowledge. In particular, the study in this paper is generally applicable to many types of COPs. This paper presents a deep neural network model to output the heuristic measure of ACO, as well as a training method for it. In addition, a neural-guided perturbation method is presented for the local search process. Experiments were conducted on 8 different types of CO problems, which showed that DeepACO significantly improves the performance of conventional ACO. It also showed excellent results when compared with recent NCO studies. Finally, this paper is excellently written and well organized. Strengths: **S1.** The method of combining meta-heuristics and deep neural networks is novel and applicable to various types of CO problems. **S2.** The performance of the ACO meta-heuristic has been greatly improved by DeepACO. In addition, the experiments were appropriately designed and performed. **S3.** Compared to recent studies on NCO, DeepACO's performance was shown to be superior. Weaknesses: **W1.** In section 5.3, it is difficult to conclusively state that DeepACO demonstrates state-of-the-art performance compared to the latest NCOs, because only the TSP500 and TSP1000 tasks are used for comparison with existing NCOs. It would be nice to have comparative experimental results with existing NCOs for more diverse tasks, for example, adding CVRP 500/1000 experiments or TSP 100/CVRP 100. **W2.** In section 4.2, details about neural-guided perturbation are lacking. Specifically, there is a lack of explanation regarding the structure of the neural network used for perturbation, as well as details on the input and output of the network and the processes of learning and inference.
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: **Q1.** Please explain in more detail about neural-guided perturbation in 4.2. (refer to W2) **Q2.** Since $\tau_{i,j}$ is fixed to $1$ when training a heuristic learner (line 183), $\tau$ is not considered in the training process. However, in Figure 4, DeepACO is robust against changes in alpha. What makes DeepACO robust to changes in alpha? **Q3.** In line 252, you wrote "We extend DeepACO from $PH_{suc}$ to $PH_{items}$ using a Transformer encoder equipped with an MLP decoder". This is a new neural network different from the heuristic learner described in 4.3. Please explain in detail about this neural network. **Q4.** How many TSP test instances were used in the experiment in Table 1 and Table 2? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer d7BW We appreciate your insightful comments and constructive suggestions. Below we present our point-to-point response. --- ### Response to Weaknesses > W1. It would be nice to have comparative experiment results with existing NCOs for more diverse tasks, e.g., CVRP 500/1000 or TSP 100/CVRP 100. Thank you for raising this concern. We have added the suggested experiments and presented the results in our **Global Response**. They demonstrate DeepACO's competitive performance across diverse combinatorial optimization tasks (TSP100, CVRP100, 400, 1000, and 2000). > W2. In section 4.2, details about neural-guided perturbation are lacking, i.e., the structure of the neural network, its input and output, and its processes of learning and inference. Neural-guided perturbation does not involve a new neural network other than that introduced in Section 4.1. It directly leverages the learned heuristic measures $\eta_{\theta}$ to guide the perturbation process. Specifically, each time a solution reaches a local optimum, it is perturbed with local search (LS), which iteratively maximizes the heuristic measures of its solution components. Note that NLS can be implemented using an arbitrary LS operator. Take TSP as an example. With 2-opt as the LS operator, NLS alternates between (a) optimizing a solution to minimize its tour length and (b) perturbing it to maximize the total heuristic measures of its edges. The two processes are similar, except that (b) uses the inverse of the heuristic measures, whereas (a) uses the distance matrix, as the indicator of good edges (line 6 in Algorithm NLS). Our training is customized for NLS. The heuristic learner is trained to minimize the expected TSP length of the NLS-refined sampled solutions. Regarding the heuristic learner introduced in Section 4.1, it utilizes the neural architecture described in lines 155-158 and Appendix A.
The inputs and outputs of this learner are COP graphs as detailed in Appendix C, and learned heuristic measures, respectively. Its training process is described in Section 4.3, and it follows the ACO algorithm introduced in Section 3 during inference. --- ### Response to Questions > Q1. Please explain in more detail about neural-guided perturbation in 4.2. Please refer to our response to W2. > Q2. What makes DeepACO robust to changes in alpha? Thank you for this insightful question. Traditionally, $\alpha$ is tuned to balance pheromones and heuristics for controlling convergence speed. Both premature convergence and slow convergence could lead to poor solutions. In DeepACO, the learned heuristic measures already provide a close-to-optimal initial search point. As a result, controlling convergence speed becomes less critical. > Q3. Please explain in detail about the neural network used for $PH_{items}$. The neural architecture used for $PH_{items}$ is detailed below and will be added to our paper. The architecture mostly follows the transformer encoder (num_hidden_layers=3, hidden_size=32, num_attention_heads=2) but omits its positional encoding. On top of it, we add position-wise feedforward layers (num_hidden_layers=3, hidden_size=32) that map the hidden representations of each solution component into its real-valued heuristic measure. Overall, it is similar to the neural networks used in [1,2]. > Q4. How many TSP test instances were used in the experiment in Table 1 and Table 2? As in previous works, we utilized the datasets released in [3], each comprising 128 test instances. This information will be added to the paper. --- **References** [1] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. [2] Goh, Y. L., Lee, W. S., Bresson, X., Laurent, T., & Lim, N. (2022).
Combining reinforcement learning and optimal transport for the traveling salesman problem. arXiv preprint arXiv:2203.00903. [3] Fu, Z. H., Qiu, K. B., & Zha, H. (2021). Generalize a small pre-trained model to arbitrarily large TSP instances. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 8, pp. 7474-7482). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses and additional experiments. I have no further questions. I will keep the rating unchanged. --- Reply to Comment 1.1.1: Title: Thanks for reviewing Comment: We sincerely appreciate your thoughtful review and valuable feedback.
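The NLS alternation described in the W2 response above can be made concrete with a small sketch. This is our own hypothetical illustration, not the authors' implementation: the function names, the greedy 2-opt pass, and the use of the element-wise inverse of the heuristic matrix as a surrogate cost are all assumptions.

```python
import numpy as np

def two_opt_pass(tour, cost):
    """One greedy 2-opt pass: reverse a segment whenever doing so
    strictly lowers the total edge cost under the given cost matrix."""
    best = tour.copy()
    n = len(best)
    improved = False
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            a, b = best[i - 1], best[i]
            c, d = best[j], best[(j + 1) % n]
            if cost[a, c] + cost[b, d] < cost[a, b] + cost[c, d] - 1e-12:
                best[i:j + 1] = best[i:j + 1][::-1].copy()
                improved = True
    return best, improved

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nls(tour, dist, heuristic, rounds=3):
    """Alternate (a) 2-opt on the distance matrix (minimize tour length) and
    (b) a 2-opt pass on the inverted heuristic matrix (perturb toward edges
    the learned heuristic considers promising); keep the best tour found."""
    inv_heu = 1.0 / (heuristic + 1e-9)  # low surrogate cost <=> high heuristic measure
    cur, best, best_len = tour, tour, tour_length(tour, dist)
    for _ in range(rounds):
        improved = True
        while improved:  # (a) optimize to a local optimum of the true objective
            cur, improved = two_opt_pass(cur, dist)
        if tour_length(cur, dist) < best_len:
            best, best_len = cur, tour_length(cur, dist)
        cur, _ = two_opt_pass(cur, inv_heu)  # (b) one perturbation pass on the surrogate
    return best
```

With `heuristic` set to the learned measures $\eta_{\theta}$, step (b) plays the role of line 6 in Algorithm NLS; any other LS operator could replace the 2-opt pass.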
Summary: This paper proposes DeepACO, a generic framework leveraging deep reinforcement learning to automate heuristic designs. Specifically, DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and dispense with laborious manual design in future ACO applications. Experiments demonstrate that DeepACO consistently outperforms its ACO counterparts on eight combinatorial optimization problems (COPs). Strengths: 1. To the best of my knowledge, this work is the first to exploit deep reinforcement learning to guide the evolution of ACO meta-heuristics. 2. Experiments demonstrate that DeepACO consistently outperforms its ACO counterparts on eight combinatorial optimization problems (COPs). Weaknesses: 1. Previous work [1] has proposed a data-driven Heuristics Schedule framework for modern mixed-integer linear programming (MILP) solvers. The authors may want to discuss the major novelty of this work over previous work. 2. It would be more convincing if the authors could show some generalization results, as the ability to generalize to larger instances is also important in solving MILPs. [1] Chmiela, Antonia, et al. "Learning to schedule heuristics in branch and bound." Advances in Neural Information Processing Systems 34 (2021): 24235-24246. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weaknesses for my questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer BitH Thank you for your time and effort in reviewing our submission, and we appreciate the fresh perspective you've offered. Below, we provide a thorough point-to-point response, trying to address each of your remarks. > W1: The authors may want to discuss the major novelty of this work over previous work [1]. The two works tackle different problem domains with different goals, learned components, and training methodologies. DeepACO provides a flexible neural enhancement framework for ACO, while [1] optimizes heuristic scheduling within a branch and bound solver. The techniques are largely complementary. The key differences between the two works are as follows. - **Problem domains.** DeepACO focuses on combinatorial optimization and is based on ACO metaheuristics. In contrast, [1] targets mixed integer programming problems and operates within a branch and bound solver framework. - **Complementary goals.** Many combinatorial optimization problems can be formulated as Mixed Integer Linear Programs. MILP is a powerful tool for solving them exactly, especially for moderate-sized instances. However, for large, complex, and even black-box problems, using MILP may become infeasible. In such cases, (meta-)heuristics can be used to find good-quality solutions efficiently. - **Learned components.** DeepACO learns heuristic measures to guide the ACO construction process. [1] learns ordering and iteration limits for primal heuristics during branch and bound. - **Training methodology.** DeepACO leverages deep reinforcement learning to train neural networks across problem instances. [1] formulates training as an optimization problem using collected heuristic performance data. [1] Chmiela, Antonia, et al. "Learning to schedule heuristics in branch and bound." Advances in Neural Information Processing Systems 34 (2021): 24235-24246. > W2: It would be more convincing if the authors could show some generalization results. 
Thank you for bringing this to our attention. In Fig. 5, we demonstrate that DeepACO can generalize across scales and distributions while preserving its neural enhancement. Moreover, additional generalization results are presented in our **Global Response**, where we effectively generalize DeepACO trained on TSP100 to TSP1000, and extend DeepACO to even larger-scale CVRP. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response. I have read the rebuttal and the other reviews. I have no further questions. --- Reply to Comment 1.1.1: Title: Thanks for reviewing Comment: We sincerely appreciate your thoughtful review and valuable feedback.
Summary: This paper presents DeepACO, a neural-enhanced solution to the limitations of Ant Colony Optimization (ACO) meta-heuristics, namely, the laborious manual design of heuristic measures and their heavy reliance on expert knowledge. ACO, which is a foraging system inspired by ant colonies, deploys artificial ants to explore solution spaces, and these solutions are biased toward promising areas through pheromone trails and heuristic measures. DeepACO innovates by learning a problem-specific mapping from an instance to its heuristic measures, leveraging neural models across instances, and incorporating these measures into ACO to influence solution constructions and help escape local optima. The paper also proposes three extended implementations of DeepACO to balance exploration and exploitation. DeepACO outperforms traditional ACO methods and is competitive against state-of-the-art and problem-specific Neural Combinatorial Optimization (NCO) methods across eight combinatorial optimization problems, even with only minutes of training. Strengths: Novelty: The paper proposes DeepACO, a new approach to enhancing Ant Colony Optimization (ACO) algorithms through a neural-enhanced meta-heuristic. This application of deep reinforcement learning to ACO is stated as being the first of its kind, thereby representing an innovative contribution to the field. Broad Applicability: DeepACO has been tested across eight different Combinatorial Optimization Problems (COPs), indicating its versatility and adaptability. It also shows competitive performance with specialized Neural Combinatorial Optimization (NCO) methods on the Travelling Salesman Problem (TSP), suggesting it is a robust method. Automation: The paper presents a method to automate the design of heuristics for future ACO applications, potentially saving significant manual effort and expert knowledge. 
Weaknesses: Limitations in Current Implementation: The authors themselves note that DeepACO may underperform when not incorporating local search (LS) components due to the current restriction of compressing all learned heuristic information into an n × n matrix. This could limit the solution's effectiveness in complex COPs. Absence of Comparison to Other Methods: While the paper mentions that DeepACO outperforms other ACO methods and is competitive with specialized NCO methods, there's no mention of a comparison with other non-ACO or non-NCO optimization methods. This could limit the ability to gauge how novel or effective DeepACO truly is in the broader context of optimization techniques. Dependence on Machine Learning: The performance of DeepACO appears to be heavily dependent on machine learning. While this isn't necessarily a weakness, it could limit its application in situations where computational resources are restricted, or where machine learning models are difficult to train due to lacking training data. Unclear Generalizability: Although DeepACO is tested on eight different COPs, it's unclear how it would perform on a broader range of problems, particularly those that don't closely resemble the tested ones. More comprehensive testing across a diverse set of problems and datasets would provide stronger evidence of its generalizability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you provide further insight into the "3D heuristic measures" or "dynamic heuristics" you mention in the conclusion? How might they help to overcome the limitations of the current implementation? How do you see DeepACO being applied in real-world situations? What kind of problems do you envision it solving most effectively? Could you discuss more about the process and challenges of training the deep reinforcement learning (DRL) model for guiding the evolution of ACO meta-heuristics? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer YjbH We appreciate the time and effort you have dedicated to reviewing our submission. Your comments have provided us with a fresh perspective and have undoubtedly enhanced the quality of our work. Below we present our point-to-point response. --- ## Response to Weaknesses > W1: Limitations in Current Implementation: The authors themselves note that DeepACO may underperform when not incorporating local search (LS) components. We acknowledge the limitation of compressing learned heuristics into an n $\times$ n matrix. It is a limitation inherited from ACO as well as other constructive (meta-)heuristics for COPs. Since we directly sample solutions on this matrix (probabilistic graph), addressing this limitation leads to constructing higher-quality COP solutions in one shot with O(n) complexity. It is an exciting direction for future work, and we present several possible avenues toward it in our response to Q1. > W2: Absence of Comparison to non-ACO/NCO Methods We appreciate the reviewer highlighting the opportunity for comparison with non-ACO/NCO methods. As this work focuses on enhancing ACO algorithms with NCO techniques, we believe comparisons within these two domains are most relevant and fair. We agree further comparisons would provide a more holistic evaluation, and have added comparisons with Guided Local Search in our Global Response, showing DeepACO's superior performance on TSP. > W3: Dependence on Machine Learning: demanding computational resources and training data. We believe the reasons below can help address your concern and make DeepACO widely accessible. - **Low computational demand.** DeepACO only requires **minutes** of training (Appendix B.2) and a lightweight model (~50k parameters) to provide substantial neural enhancement. Moreover, we observed that training DeepACO solely on a **CPU**, rather than utilizing a GPU, even accelerates the training process on our hardware. 
This expedited training is primarily because most of the computational time is spent on on-policy solution sampling, rather than on the forward and backward neural network propagation. - **Minimal reliance on (real-world) training data.** DeepACO requires only **a few hundred training instances** in most tasks to achieve significant neural enhancement (Appendix B.2). Furthermore, when obtaining a realistic data distribution is challenging, we can confidently train DeepACO on algorithmically generated (Appendix C) **inexhaustible synthetic data** and generalize it effectively. Fig. 5 shows that DeepACO trained on fixed-scale and uniform synthetic data can clearly outperform its ACO counterparts on variable-scale real-world benchmarks. > W4: Unclear Generalizability: More comprehensive testing across a diverse set of problems and datasets would provide stronger evidence of its generalizability. The generalizability of DeepACO is rooted in the generalizability of the ACO meta-heuristic and the feasibility to represent many COPs' solutions using binary variables. In our paper, we evaluated DeepACO on **26 datasets** (please also refer to Appendix B.1 for more details) spanning **8 diverse COPs**, including routing, assignment, scheduling, and subset COP types, which do not resemble one another. Furthermore, DeepACO can be extended to an even broader range of COPs (e.g., Karp's 21 NP-complete problems), as exemplified in our general response. --- ## Response to Questions > Q1: further insight into the "3D heuristic measures" or "dynamic heuristics" "3D heuristic measures" and "dynamic heuristics" are possible extensions of the current 2D heuristic matrix, allowing for compressing more learned heuristics of a COP. It is possible that we can realize such extensions in various ways, where two possibilities are described below. 
- One option is to assign each ant its unique heuristic matrix and collaboratively train the ant population using individual-specific "niching loss" (inspired by the niching methods in Evolutionary Computation). In this manner, we obtain 3D heuristic measures, allowing for cooperatively exploring multiple optima of complex COPs or the Pareto front of multi-objective COPs. - Another possible approach, i.e., dynamic heuristics, generates heuristic measures at various points either throughout the solution-constructing process or during ACO iterations. In the first scenario, heuristic measures can be generated based on partial solutions. In the second scenario, we can learn to adapt the heuristic measures according to the updated pheromone trails. > Q2: How do you see DeepACO being applied in real-world situations? What kind of problems do you envision it solving most effectively? The ACO metaheuristic already has broad applicability in real-world situations [1]. DeepACO takes it one step further by dispensing with the required expert knowledge and automatically enhancing its performance, thereby extending its applications to scenarios involving more complex and black-box COPs. We believe it is particularly competitive for problems with little or suboptimal expert knowledge, as well as when instances to solve follow similar distributions. [1] Dorigo, M., & Stützle, T. (2019). *Ant colony optimization: overview and recent advances* (pp. 311-351). Springer International Publishing. > Q3: the process and challenges of training the deep reinforcement learning (DRL) model To train a DRL model for ACO, we need to (1) build a simulation environment based on COP constraints and ACO algorithm, (2) determine the distribution for sampling synthetic instances or gather real-world data, (3) set the RL reward based on COP objective, and (4) code the RL algorithm. Overall, DeepACO is relatively easy to train. 
Most coding effort was spent on building the simulation environment for each problem to enable efficient parallel solution sampling. By comparison, less effort was spent on the DRL training part, which is generic for different problems. --- Rebuttal Comment 1.1: Title: Follow up Comment: Dear Reviewer, We would appreciate it if you would be so kind as to acknowledge and respond to the authors' rebuttal. This is crucial to ensure the reviewing process is conducted adequately. AC --- Rebuttal Comment 1.2: Comment: The authors have done a good job of addressing my concerns. I have adjusted my score accordingly. --- Reply to Comment 1.2.1: Title: Thanks for reviewing Comment: We sincerely appreciate your thoughtful review and valuable feedback.
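The training recipe sketched in the Q3 answer above (set the RL reward from the COP objective, then code the RL algorithm) can be illustrated with a minimal REINFORCE-style signal. This is a deliberately simplified, hypothetical sketch; the batch-mean baseline and all names here are our own assumptions, not the authors' code.

```python
import numpy as np

def policy_gradient_signal(log_probs, lengths):
    """REINFORCE-style training signal for on-policy sampled solutions:
    the reward is the negative COP objective (e.g., tour length), and
    subtracting the batch-mean baseline reduces gradient variance. The
    pseudo-loss has the policy gradient as its gradient w.r.t. the
    per-solution log-probabilities."""
    lengths = np.asarray(lengths, dtype=float)
    log_probs = np.asarray(log_probs, dtype=float)
    advantages = lengths - lengths.mean()  # positive => worse than the batch average
    pseudo_loss = float((advantages * log_probs).mean())
    return advantages, pseudo_loss
```

In an autodiff framework one would backpropagate through `pseudo_loss` while treating `advantages` as a constant.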
Summary: This article proposes DeepACO, which is a generic framework leveraging deep reinforcement learning to automate heuristic designs. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms. According to the experiments, DeepACO consistently outperforms its ACO counterparts on eight COPs using a single neural model and a single set of hyperparameters. It also performs better than or competitively against the problem-specific methods on the canonical Travelling Salesman Problem. The article reviews the related works, explains ACO and the proposed methodology, and presents the experiments: their settings, results, and analysis. It is followed by supplementary materials. Strengths: It is a well-written paper and I enjoyed reading it. All important concepts seem to be clearly explained. The method was tested on 8 benchmark problems, which is a huge strength. On all of them, DeepACO outperformed standard ACO methods. In general, the results of the experiments are very good. The limitations of the method are discussed as well, the authors declared that the code used in experiments will be made publicly available, and indeed I found it in the supplementary materials. Weaknesses: For some reason, hyperlinks to the bibliography do not work in the PDF that I received. I didn't find information on how the parameters alpha and beta (control parameters) were set in experiments. The writing can be slightly improved: - p.7: "On the other hand" appears at the beginning of 2 consecutive sentences, so maybe in 1 of those sentences, it can be substituted. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Question to Fig. 6: does it make sense to combine all the extensions and use them at once? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The authors adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer hKqX To begin with, we are encouraged that you enjoyed reading our paper and we sincerely appreciate your insightful feedback. Below, we provide our point-to-point response. > W1: Hyperlinks to the bibliography do not work. We sincerely regret any inconvenience caused by the hyperlink issue. We'll ensure the hyperlinks to the bibliography work in the final version. > W2: Missing information on how the parameters alpha and beta (control parameters) were set. Thank you for pointing this out. We've consistently set alpha and beta as 1 during our experiments, except when testing for hyperparameter robustness in Fig. 4. This setting will be incorporated into the final edit. > W3: writing can be slightly improved on p.7. Thank you for this suggestion. We will rephrase the sentence to avoid repetition. > Q: In Fig. 6, does it make sense to combine all the extensions and use them at once? Thank you for this insightful question. Yes, we can combine them since they are not mutually exclusive. However, combining all of them may not be more effective than implementing just one. This is because they are designed for a similar purpose, and a combination entails tuning more hyperparameters for effective training. --- Rebuttal Comment 1.1: Comment: Thanks for the information. I've read the rebuttal and don't have more questions. As for now, I keep my current rating. --- Reply to Comment 1.1.1: Title: Thanks for reviewing Comment: We sincerely appreciate your thoughtful review and valuable feedback.
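The role of the alpha and beta parameters discussed above can be illustrated with a minimal, hypothetical ACO construction step (our own sketch, with $\alpha=\beta=1$ as the default mentioned in the response to W2; the function name and array shapes are assumptions):

```python
import numpy as np

def construct_tour(pheromone, heuristic, alpha=1.0, beta=1.0, start=0, rng=None):
    """One ant's TSP tour construction: each next city is sampled with
    probability proportional to pheromone**alpha * heuristic**beta
    over the cities not yet visited."""
    rng = np.random.default_rng() if rng is None else rng
    n = pheromone.shape[0]
    tour, visited = [start], np.zeros(n, dtype=bool)
    visited[start] = True
    while len(tour) < n:
        cur = tour[-1]
        weights = (pheromone[cur] ** alpha) * (heuristic[cur] ** beta)
        weights[visited] = 0.0  # never revisit a city
        tour.append(int(rng.choice(n, p=weights / weights.sum())))
        visited[tour[-1]] = True
    return tour
```

With uniform pheromones, the learned heuristic fully determines the sampling distribution, which is consistent with the rebuttal's point that a close-to-optimal $\eta_{\theta}$ makes tuning convergence speed via $\alpha$ less critical.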
Rebuttal 1: Rebuttal: # Global Response We are grateful to the reviewers for their insightful feedback and for recognizing the merit of our paper, e.g., novelty (Reviewer YjbH, BitH, d7BW), generalizability (Reviewer hKqX, YjbH, d7BW), effectiveness (Reviewer YjbH, BitH, d7BW), excellent presentation (Reviewer hKqX, d7BW). We have tried our best to address your primary concerns and will also rectify all minor issues raised. In our paper, we evaluated DeepACO on 26 datasets (please also refer to Appendix B.1) spanning 8 diverse COPs, including routing, assignment, scheduling, and subset COP types. To further demonstrate the generalizability and superiority of DeepACO, we perform additional experiments with more diverse tasks/scales and introduce more baselines. Before responding to each reviewer's specific comments, we present additional experimental results here.

--- (*Review YjbH-W2*) The table below compares DeepACO with Guided Local Search, a non-ACO/NCO metaheuristic used in Google OR-Tools, and reported to be “generally the most efficient metaheuristic for vehicle routing [1]”. The results show DeepACO's superior performance on TSP.

| | TSP100 | | TSP500 | | TSP1000 | |
|-|-|-|-|-|-|-|
| | Obj. | Time | Obj. | Time | Obj. | Time |
| Guided Local Search | 7.83 | 10s | 17.32 | 20s | 24.29 | 40s |
| DeepACO (ours) | 7.76 | 1.2s | 16.86 | 10s | 23.85 | 32s |

--- (*Review YjbH-W4*) The table below showcases that DeepACO can be extended to an even broader range of COPs. Specifically, we additionally tackle the Bin Packing Problem (BPP), a grouping problem aiming to optimally split items into groups. We follow the experimental setup described in [2] and demonstrate DeepACO's consistent neural enhancement.
| T | 1 | 5 | 10 | 20 | 30 | 40 |
|-|-|-|-|-|-|-|
| ACO $\uparrow$ | 0.877 | 0.896 | 0.902 | 0.907 | 0.909 | 0.910 |
| DeepACO $\uparrow$ | 0.947 | 0.952 | 0.954 | 0.956 | 0.957 | 0.958 |

--- (*Reviewer BitH-W2*) The table below compares the performance of ACO, DeepACO trained on TSP100, and DeepACO trained on the respective test scale, all implementing vanilla LS (instead of NLS to ensure the same execution time for DeepACO and ACO). The results show that DeepACO still outperforms its ACO counterpart even with a significant distributional shift (i.e., from TSP100 to TSP1000).

| | TSP500 | TSP1000 |
|-|-|-|
| ACO (LS) | 17.55 | 24.93 |
| DeepACO (LS, trained on **TSP100**) | 17.18 | 24.69 |
| DeepACO (LS) | 16.98 | 23.96 |

--- (*Reviewer d7BW-W1, BitH-W2*) The tables below gather the comparative experimental results with existing NCO methods for more diverse tasks, i.e., CVRP100, 400, 1000, 2000 and TSP100. They demonstrate DeepACO's consistent and competitive performance. The NLS strategy for CVRP is based on the SWAP* neighborhood [3].

| | CVRP100 | CVRP400 | CVRP1000 | CVRP2000 |
|-|-|-|-|-|
| AM [4] | 16.42 (0.06s) | 29.33 (0.20s) | 61.42 (0.59s) | 114.36 (1.87s) |
| TAM-LKH3 [5] | 16.08 (0.86s) | 25.93 (1.35s) | 46.34 (1.82s) | 64.78 (5.63s) |
| DeepACO (NLS, T=4) | **16.07** (2.97s) | **25.31** (3.65s) | **45.00** (10.21s) | **61.89** (14.53s) |
| DeepACO (NLS, T=10) | **15.77** (3.87s) | **25.27** (5.89s) | **44.82** (15.87s) | **61.66** (35.94s) |

| | TSP100 |
|-|-|
| AM [4] | 7.945 (0.36s) |
| GCN [6] | 7.907 (6.13s) |
| da Costa et al. [7] | 7.821 (30.66s) |
| Hudson et al. [8] | 7.815 (10.11s) |
| Att-GCRN+MCTS [9] | 7.764 (0.53s) |
| DeepACO (NLS, T=4) | 7.767 (0.50s) |
| DeepACO (NLS, T=10) | **7.763** (1.23s) |

--- **References** [1] Google. Google OR-Tools. [2] Levine, J., & Ducatelle, F. (2004). Ant colony optimization and local search for bin packing and cutting stock problems. Journal of the Operational Research Society, 55(7), 705-716. [3] Vidal, T. (2022).
Hybrid genetic search for the CVRP: Open-source implementation and SWAP* neighborhood. *Computers & Operations Research*, *140*, 105643. [4] Kool, W., van Hoof, H., & Welling, M. (2019). Attention, Learn to Solve Routing Problems! (arXiv:1803.08475). arXiv. [5] Hou, Q., Yang, J., Su, Y., Wang, X., & Deng, Y. (2023). Generalize Learned Heuristics to Solve Large-scale Vehicle Routing Problems in Real-time. The Eleventh International Conference on Learning Representations. [6] Joshi, C. K., Laurent, T., & Bresson, X. (2019). An Efficient Graph Convolutional Network Technique for the Travelling Salesman Problem (arXiv:1906.01227). arXiv. [7] Costa, P. R. d O., Rhuggenaath, J., Zhang, Y., & Akcay, A. (2020). Learning 2-opt Heuristics for the Traveling Salesman Problem via Deep Reinforcement Learning. Proceedings of The 12th Asian Conference on Machine Learning, 465–480. [8] Hudson, B., Li, Q., Malencia, M., & Prorok, A. (2022). Graph Neural Network Guided Local Search for the Traveling Salesperson Problem (arXiv:2110.05291). arXiv. [9] Fu, Z. H., Qiu, K. B., & Zha, H. (2021). Generalize a small pre-trained model to arbitrarily large TSP instances. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 8, pp. 7474-7482).
NeurIPS_2023_submissions_huggingface
2023
Towards Distribution-Agnostic Generalized Category Discovery
Accept (poster)
Summary: This paper presents a novel real-world-driven problem setting named long-tailed open-world classification (LT-OPC), where a model should predict both closed-set and open-set classes under a long-tailed distribution. To address these challenges, a Self-Balanced Co-Advice contrastive framework (BaCon) is proposed, where both a contrastive learning branch and a pseudo-labeling branch provide supervision for the LT-OPC task. First, the contrastive learning branch estimates the training distribution of the data to regularize the pseudo-labeling branch. In the pseudo-labeling phase, a debiasing and sampling phase is designed to resolve the data imbalance and insufficient supervision for the novel classes. Finally, the pseudo-labeling branch further provides supervision to the contrastive learning branch, in the form of a pseudo-label-based soft contrastive learning objective. A series of experiments are carried out under the proposed setting and the effectiveness of each of the proposed components is studied. Strengths: - The proposed LT-OPC setting with long-tailed distribution data considers a more practical real-world scenario, compared to the traditional semi-supervised learning and generalized category discovery settings. - It seems that the contrastive learning strategy and the pseudo-labeling strategy, with interaction between them, are well designed to deal with the proposed LT-OPC setting. The ablation results from Table 6 can support this design. - Experimentally, the proposed BaCon framework shows competitive results against the baseline methods in the imbalanced SSL and generalized category discovery settings. Weaknesses: - It is unclear whether the proposed problem setting of LT-OPC is novel. There have been some published works that tried to combine semi-supervised learning and generalized category discovery in the context of open-world semi-supervised learning [C1, C2], but no discussion or comparison with them is provided in this paper.
In my view, the only difference is whether the training data has a long-tailed distribution, which can weaken the task novelty of this work.
- It is unclear whether the comparisons with imbalanced SSL methods are fair. To my understanding, the model trained with the proposed BaCon objective gets supervision for the novel classes from the unlabeled data through the pseudo-labeling term, while the baseline SSL methods do not. In that sense, it is natural that the proposed method shows higher accuracy on the 'New' category classes. Clarification from the authors is expected. In addition, neither DARP nor ABC among the imbalanced SSL baselines contains a contrastive branch. To make the comparison fairer, considering additional SSL baselines that include a contrastive learning branch is necessary [C3, C4].
- For training the baseline methods, is the training setup identical to the procedure of the proposed BaCon method? Training the BaCon model relies on a transformer architecture with the strong DINO pretraining method. It is unclear whether the training protocol for the other baseline methods is the same as BaCon's, which matters for a fair comparison.
- Because two different mechanisms, pseudo-labeling and contrastive learning, are combined, many hyper-parameters are introduced. How can their values be determined on a new dataset without simply tuning on the real test set of each benchmark? In a real-world scenario, priors about the novel class categories may be prohibited. To simplify the question: can a model tuned on a dataset including both known classes and novel classes (e.g., validation data) generalize to a new dataset including 'unseen' novel classes (e.g., test data)? A detailed discussion of this assumption would be appreciated.

---

References

[C1] Rizve, Mamshad Nayeem, et al. "OpenLDN: Learning to discover novel classes for open-world semi-supervised learning." *European Conference on Computer Vision*. Cham: Springer Nature Switzerland, 2022.
[C2] Rizve, Mamshad Nayeem, Navid Kardan, and Mubarak Shah. "Towards realistic semi-supervised learning." *European Conference on Computer Vision*. Cham: Springer Nature Switzerland, 2022.
[C3] Park, Jongjin, et al. "OpenCoS: Contrastive semi-supervised learning for handling open-set unlabeled data." *European Conference on Computer Vision*. Cham: Springer Nature Switzerland, 2022.
[C4] Zheng, Mingkai, et al. "SimMatch: Semi-supervised learning with similarity matching." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2022.

Technical Quality: 2 fair Clarity: 2 fair

Questions for Authors:
- The notation $\mathcal{Y}_{n}$ in line 113 is used without definition.
- What exact type of loss function is used for the pseudo-labeling loss $\mathcal{L}_{u}$ in Equation 3?
- Code is attached, but it is unclear whether it is executable. If you decide to include code in the submission, making it more understandable would be appreciated.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: A brief limitation of the work can be found, but potential negative societal impact has not been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> Novelty of LT-OPC.

Please kindly refer to 'The Novelty of the Proposed Setting' in the 'General Response' column at the top.

> Discussions or comparisons with [C1, C2] are not provided.

Thank you for the reminder! We have migrated the two methods [C1, C2] you mentioned to the proposed LT-OPC scenario, and we will include them as baseline methods in the final version. The results are shown in the rebuttal PDF file as Table A and Table B. It is worth noting that, for a fair comparison, we have replaced the backbone model of both methods with the more powerful DINO pre-trained ViT, which is consistent with the backbone model of our method. Moreover, we will introduce these two methods in the 'Related Works' section. In addition, the two methods share similarities with the GCD scenario, which considers open-set class samples in unlabeled data within a semi-supervised setting; during testing, the model is required to classify both close-set and open-set samples. In Table 1 and lines 28 to 44 of the main text, we provide a detailed comparison between the proposed LT-OPC and relevant scenarios such as GCD, and we will incorporate the comparisons with these two methods at the corresponding location.

> The model trained with BaCon gets supervision for the novel classes from the pseudo-labeling term, while the baseline SSL methods do not.

Firstly, we would like to clarify that the datasets used for training the baseline SSL methods and our method are exactly the same, as shown in Figure 1(a) in the main text. The training set consists of labeled close-set samples, plus unlabeled close-set and open-set samples. Secondly, we would like to point out that **the baseline SSL methods also utilize 'the pseudo-labeling term' during training** to obtain supervision for the unlabeled samples, which is the same as our method.
Specifically, their methods first generate pseudo-labels for each sample, then filter out pseudo-labels with confidence scores lower than a pre-set threshold; the remaining pseudo-labels are used as supervision for training. On the other hand, our method trains a pseudo-labeling branch to generate pseudo-labels, which in turn guide contrastive learning. In this process, **no additional supervision**, such as the true labels of the 'New'-category samples, is used. Therefore, we believe that our comparison with the baseline SSL methods is fair.

> Considering additional SSL baselines including a contrastive learning branch is necessary [C3, C4].

Thank you for your suggestion! We conducted experiments on the two methods, OpenCoS [C3] and SimMatch [C4], and report the results in Table E. To ensure fairness, we re-implemented both methods using DINO pre-trained ViT as the backbone model. We will include the results in the final version.

> It is unclear whether the training protocol for the baseline methods is the same as BaCon's.

Please kindly refer to 'The Fairness of the Experiments' in the 'General Response' column at the top.

> How can hyper-parameters be determined on a new dataset without simply tuning on the real test set?

To verify whether our method can be extended to 'unseen' novel classes, we conducted experiments on ImageNet. Specifically, we first randomly sample 50 categories as 'known' classes and keep these 50 classes consistent across all experiments. Then, we sample 6x50 categories **without repetition** from the remaining 950 categories, resulting in six groups of classes named 'novel_val' and 'novel_test_A/B/C/D/E'. Next, we select the optimal hyperparameters based on performance on the {'known' + 'novel_val'} dataset. **Keeping all the hyperparameters fixed**, we further train and evaluate the model on the five different test datasets.
The experimental results, shown in Table G, reveal that the hyperparameters performing best on the validation set achieved similar performance on the different 'unseen' novel classes. This indicates that our method exhibits good generalization, implying its effectiveness in handling 'unseen' novel classes. We will report these results in the final version.

> $\mathcal{Y}_n$ in line 113 is used without definition.

We would like to thank the reviewer for a very detailed review. $\mathcal{Y}_n$ represents the set of labels for the novel classes, whose size corresponds to the total number of categories in the open-set data. We will include this clarification in Section 3.1.

> What exact type of loss function is utilized for $\mathcal{L}_u$ in Eq. 3?

In Equation 3, the term $\mathcal{L}_u$ can be any unsupervised loss function used in classifier-based GCD methods. In our experiments, we choose two different $\mathcal{L}_u$ to demonstrate the effectiveness of our method: the cross-pseudo supervision loss from SimGCD (BaCon-S) and the pairwise objective from ORCA (BaCon-O). Specifically, the cross-pseudo supervision loss is similar in spirit to self-distillation and SwAV [1]: it uses the predictions of two views (differently augmented images) of the same image as pseudo-labels for each other and calculates the cross-entropy loss. The pairwise objective in ORCA, on the other hand, encourages intra-class consistency by pulling together samples with similar representations. The experimental results in Section 5 show that our method can adapt to different $\mathcal{L}_u$ and achieve significant improvements, demonstrating the effectiveness and versatility of the proposed approach.

> It is unclear whether the code is executable.

We apologize for the inconvenience. Some of the code was not successfully synchronized from our GitHub repository to the anonymous repository. We have now resolved this issue and attached a readme file as an explanation.
Thanks for the reminder!

[1] Caron, M., et al. "Unsupervised learning of visual features by contrasting cluster assignments." NeurIPS, 2020.

---

Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: I acknowledge the authors' effort in the rebuttal. After carefully checking comments from other reviewers and the rebuttal, I would like to provide follow-up comments and questions.

> Novelty of LT-OPC

I am not fully convinced about the task novelty. While considering imbalance in data is practical, it just feels to me like a straightforward extension of the GCD setting to long-tailed distributions. In addition, one could argue that existing GCD methods such as SimGCD can cover such long-tailed distributions with clustering-based re-balancing techniques over both closed and open categories, similar to the proposed work.

> The model trained with BaCon gets supervision for the novel classes from the pseudo-labeling term, while the baseline SSL methods do not.

This comment was not about the dataset itself or the pseudo-labeling branch, but about the category space of the supervision in the proposed work versus the other SSL works. To my understanding, the estimated distribution $\pi_c$ from k-means clustering (Eqn 2 of the main paper) includes both closed- and open-set classes in its category space. Due to this, the loss terms in equations 2, 6, and 7 can contain supervision signals for open-set classes. On the other hand, SSL methods with only a pseudo-labeling branch get supervision signals only from closed-set categories. In that sense, I think it is natural that the proposed method shows much higher accuracy on novel categories. If I am wrong, please let me know.

For the other comments in the rebuttal, the additional experiments under fair settings (i.e., pretraining and backbone) and the hyper-parameter tuning on 'valid' novel classes are encouraging. In summary, I will raise my score recommendation by one (3 -> 4) for the promising results.
---

Reply to Comment 1.1.1: Title: Further Reply to Reviewer 7Jpa (1/2) Comment:

> I am not fully convinced about the task novelty. While considering imbalance in data is practical, it just feels to me like a straightforward extension of the GCD setting to long-tailed distributions.

Indeed, LT-OPC could be seen as a straightforward extension of the GCD setting, but considering a long-tailed distribution in an open world is a much more challenging scenario. More importantly, it cannot be effectively addressed by existing approaches in long-tail learning, because: (1) most supervised or semi-supervised long-tail learning methods [1, 2, 3] require the prior training-set (long-tailed) distribution to tackle the data imbalance issue, which prevents them from being directly extended to the proposed LT-OPC task, since the prior dataset distribution is unavailable in LT-OPC (lines 56-58); (2) though several self-supervised methods [4, 5] can alleviate the imbalance issue without knowing the dataset distribution, they discard all the accessible label information and bring only marginal accuracy gains. As shown in Table C (please kindly refer to the rebuttal PDF file), we combine the two best-performing baseline methods, GCD and SimGCD, with BCL [5], one of the latest self-supervised long-tail learning methods. We observe that BCL cannot effectively tackle the data imbalance issue, while the proposed BaCon improves both accuracy and balancedness by a large margin via dynamic distribution estimation (Section 4.1) and the self-balanced knowledge transfer module (Section 4.2).

> In addition, one can think that existing GCD methods such as SimGCD can cover such long-tailed distributions with clustering-based re-balancing techniques upon both closed and open categories, similar to the proposed work.

In fact, we initially considered modifying methods such as SimGCD to generalize them to our proposed LT-OPC scenario.
However, our experimental results indicate that improving these methods with long-tail learning approaches does not achieve satisfying performance. Specifically, we replace the uniform distribution prior (over both close and open categories) used in SimGCD with the **oracle distribution** of the long-tailed dataset in LT-OPC, i.e., the distribution of both labeled and unlabeled data, which is not available in practice. We then evaluate SimGCD equipped with the oracle distribution, which can be regarded as its upper-bound performance; the results are summarized below (in the second box). We observe that SimGCD still performs poorly on LT-OPC even when provided with the oracle distribution for regularization, while the proposed BaCon outperforms it by a large margin. This indicates that simply modifying GCD methods with re-balancing techniques does not perform well in LT-OPC. BaCon, in contrast, outperforms the baselines through the design of its dual-branch structure, in which the two branches work collaboratively to provide interactive supervision and achieve self-rebalancing; in addition, the contrastive-learning branch benefits from the 'self-balanced knowledge transfer' module, which helps learn a balanced and reasonable feature space.

> This comment was not about either the dataset itself or the pseudo-labeling branch, but the category space of the supervision in the proposed work versus the other SSL works. To my understanding, the estimated distribution from k-means clustering (Eqn 2 of the main paper) includes both closed- and open-set classes in its category space. Due to this, the loss terms in equations 2, 6, and 7 can contain supervision signals for open-set classes. On the other hand, SSL methods with only a pseudo-labeling branch get supervision signals only from closed-set categories. In that sense, I think it is natural that the proposed method shows much higher accuracy on novel categories.
> If I am wrong, please let me know.

We apologize for misunderstanding the question. Indeed, it is crucial for SSL methods to get supervision signals from open-set samples. We would like to clarify that we provided the imbalanced SSL methods (i.e., ABC and DARP) with the oracle distribution in our experiments; otherwise, they would have nearly zero accuracy on novel classes. We will add this description in the final version to avoid ambiguity.

[1] Menon, Aditya Krishna, et al. "Long-tail learning via logit adjustment." ICLR, 2021.
[2] Kim, et al. "Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning." NeurIPS, 2020.
[3] Lee, et al. "ABC: Auxiliary balanced classifier for class-imbalanced semi-supervised learning." NeurIPS, 2021.
[4] Jiang, Ziyu, et al. "Self-damaging contrastive learning." ICML, 2021.
[5] Zhou, Zhihan, et al. "Contrastive Learning with Boosted Memorization." ICML, 2022.
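As context for the distribution estimate $\pi_c$ debated in this thread (k-means over features of all samples, with normalized cluster sizes read off as class frequencies), the idea could be sketched roughly as follows. This is our own illustrative reconstruction under stated assumptions, not the paper's actual Eqn 2 implementation; the function and parameter names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans


def estimate_class_distribution(features, num_classes, seed=0):
    """Illustrative sketch: cluster all features (labeled + unlabeled,
    closed- and open-set) into `num_classes` groups, then use normalized
    cluster sizes as a rough estimate of the training class distribution."""
    labels = KMeans(n_clusters=num_classes, n_init=10,
                    random_state=seed).fit_predict(features)
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()
```

Note that the resulting distribution covers both closed- and open-set clusters, which is exactly the reviewer's point: any term regularized by it carries signal about open-set classes.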
Summary: The paper proposes a combination of contrastive learning and semi-supervised learning to label images in open-world classification. The classes are assumed to be long-tailed, as in real-world cases. Not all classes have a labeled example, but the number of classes is assumed to be known beforehand. The proposed method follows a "two-branch structure", with one branch doing contrastive learning and another doing pseudo-labeling.

Strengths:
1. A highly relevant problem statement for real-world application of image classification
2. Thorough comparison against related works, both in conceptual differences and in evaluation
3. The soft contrastive loss is an interesting idea

Weaknesses: Some of the algorithmic steps are not clearly described. Questions are listed below.

Technical Quality: 3 good Clarity: 2 fair

Questions for Authors:
1. Line 146: What do you mean by sample number n?
2. Line 146: What does the symbol (n^c) mean? Is it "n choose c" or "n raised to c"?
3. How is the Hungarian optimal assignment used in the context of the problem? How do you define "cost" in the adjacency matrix? The citation provided does not explain this.
4. What is the "align" operation in equation 2?
5. Why does equation 4 lead to a "post-hoc logits adjustment"? It is better to assume the reader has not read citation 39 and explain the concept briefly.
6. How does equation 5 lead to "we prioritize selecting samples with higher prediction confidence in each class" (line 179)?
7. Line 183: How do you mitigate the imbalance issue with this sampling process? Even the ground-truth labels are highly skewed.
8. What is A(i) in equations 1 and 6?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Good paper overall. I would like to see a clearer explanation of the methods, as listed above. The experiments where the number of classes is unknown are perhaps of higher importance, but they have been pushed to the Appendix. It would be good to include a summary in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> Line 146: What do you mean by sample number n?

The sample number $\boldsymbol{n}$ is a vector of size $C$, where the $i$-th element represents the number of samples in the $i$-th cluster (lines 144-145).

> Line 146: What does the symbol (n^c) mean? Is it "n choose c" or "n raised to c"?

$n^c$ refers to the $c$-th element of the vector $\boldsymbol{n}$, i.e., the number of samples in the $c$-th cluster. We realize that this notation is ambiguous, so we will change $n^c$ to $n[c]$ in the final version. Thank you for the reminder.

> How is the Hungarian optimal assignment used in the context of the problem? How do you define "cost" in the adjacency matrix? The citation provided does not explain this.

Assume we want to calculate the accuracy of a classification task with $C$ categories using a cost matrix $W \in \mathbb{R}^{C \times C}$. Here, $-w_{ij}$ is the number of samples whose highest logit is in the $i$-th dimension of the model classifier (i.e., samples predicted to dimension $i$) and whose true label is the $j$-th class. We take the minus sign because the Hungarian optimal assignment algorithm **minimizes** the cost, while we want to find an optimal assignment that **maximizes** the test accuracy. After defining such a cost matrix, we can use the Hungarian algorithm to obtain the optimal assignment, under which the model has the maximum test accuracy. We will add this explanation to the appendix in the final version.

> What is the "align" operation in equation 2?

The 'align' operation means finding the correspondence between clusters and categories using the method described in lines 146-149. In short, we first map the 'known' classes via the Hungarian algorithm performed on labeled samples, then sort the remaining clusters by cluster size and assign them sequentially to the novel classes.

> Why does equation 4 lead to a "post-hoc logits adjustment"?
It is better to assume the reader has not read citation 39 and explain the concept briefly.

Thanks for the advice! Equation 4 implements the Logit Adjustment [1] method, one of the SOTA methods in supervised long-tail learning. Its core idea is to adjust the output logits at test time to mitigate the difference in data distribution between testing (balanced) and training (long-tailed). In other words, the adjustment term $-k \cdot \pi_e$ introduces a label-dependent offset, which amplifies the tail-class probability according to the data distribution during training, leading to an unbiased probability estimate. We will add a brief explanation to Section 4.2.

> How does equation 5 lead to "we prioritize selecting samples with higher prediction confidence in each class" (line 179)?

In fact, we only use Equation 5 to determine the number of samples that should be drawn for each category. After determining this number, we rank the samples of each category in descending order of confidence and select the highest-confidence samples up to the required number.

> Line 183: How do you mitigate the imbalance issue with this sampling process? Even the ground-truth labels are highly skewed.

The long-tailed distribution of the dataset can indeed make re-balancing difficult. However, since the sampling rate we set is inversely proportional to the number of samples in a category, the number of samples drawn from each category is roughly the same, which still alleviates the long-tailed distribution problem: the data for contrastive learning has a smaller imbalance ratio after incorporating the sampled images.

> What is A(i) in equations 1 and 6?

$A(i)$ is the set of indices of all other features (both positive and negative pairs) in contrastive learning.
Take SimCLR [2] for instance: assume we train the model with batch size $B$, and all samples are augmented twice with different augmentations ($2B$ views), then fed into the model to obtain $2B$ features $\{z_i\}_{i=1}^{2B}$. For feature $z_i$, $A(i)$ refers to the set of indices of all other $(2B-1)$ features in the training batch. We will add this clarification to the main text.

> The experiments where the number of classes is unknown are perhaps of higher importance, but they have been pushed to the Appendix. It would be good to include a summary in the main paper.

Thanks for the advice! We will add a summary of the Appendix experiments to Section 5.1.

[1] Menon, Aditya Krishna, et al. "Long-tail learning via logit adjustment." ICLR, 2021.
[2] Chen, Ting, et al. "A Simple Framework for Contrastive Learning of Visual Representations." ICML, 2020.

---

Rebuttal Comment 1.1: Title: Acknowledging Response Comment: Dear Authors, Thank you for your thoughtful responses. The questions I raised have been answered adequately.

---

Reply to Comment 1.1.1: Title: Further Reply to Reviewer B9Lc Comment: Thank you very much for your valuable questions and suggestions! We sincerely appreciate them and will make comprehensive revisions based on your comments to further improve the quality of our work. Thanks again and best wishes!
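The Hungarian-matched accuracy computation described in this rebuttal (a cost matrix counting prediction/label co-occurrences, negated so that minimizing cost maximizes accuracy) can be sketched as follows. This is our own minimal illustration using `scipy.optimize.linear_sum_assignment`; the function and variable names are ours, not from the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def cluster_accuracy(y_pred, y_true, num_classes):
    """Clustering accuracy under the optimal cluster-to-class assignment.

    w[i, j] counts samples predicted to dimension i whose true label is j.
    linear_sum_assignment minimizes total cost, so we negate w to find the
    assignment that maximizes the number of correctly matched samples.
    """
    w = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        w[p, t] += 1
    row_ind, col_ind = linear_sum_assignment(-w)
    return w[row_ind, col_ind].sum() / len(y_true)
```

For example, predictions `[0, 0, 1, 1]` against labels `[1, 1, 0, 0]` score 1.0, since the optimal assignment simply relabels cluster 0 as class 1 and vice versa.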
Summary: This paper tackles the long-tailed open-world classification problem. It must handle data imbalance from the long-tailed distribution as well as closed-world/open-world classification. For open-world classes, there are only unlabeled data; moreover, it is assumed that unlabeled data can also come from the "closed-world" classes. The paper proposes a framework, called "Self-Balanced Co-Advice contrastive framework (BaCon)", to tackle the data imbalance and handle open-set classification. The framework consists of a contrastive-learning branch and a pseudo-labeling branch, and the two branches also work collaboratively. Experiments are performed on image classification datasets.

Strengths: The proposed problem of "long-tailed open-world classification" is valid. Both "long-tailed" and "open-world" are popular and important problems in vision. This paper combines problems and approaches from a couple of previous works, and the proposed approach may work.

Weaknesses: The proposed problem setting is not very novel. For example, the previous work "Large-scale long-tailed recognition in an open world" (reference [36]) handles long-tailed problems in an open world, and the paper "Generalized Category Discovery" (reference [52]) proposes that "the unlabelled images may come from labelled classes or from novel ones." The proposed approach is also not novel; for example, the contrastive learning idea has been used in "Generalized Category Discovery" (reference [52]). Otherwise, the key insights need to be clearly stated.

Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The experiments are done on the CIFAR, ImageNet, and Places datasets. Are there any experiments on other fine-grained long-tailed datasets? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> The proposed problem setting is not very novel. For example, the previous work of "Large-scale long-tailed recognition in an open world" (reference [36]) handles long-tailed problems in an open world. Also the paper "Generalized Category Discovery" (reference [52]) proposes "the unlabelled images may come from labelled classes or from novel ones."

GCD [1] and OLTR [2] are indeed related to the proposed LT-OPC setting, but we would like to clarify that our problem setting still differs significantly from them and introduces unique challenges.

**Compared to GCD:** GCD only focuses on the open-world problem and ignores the data imbalance issue present in the real world, i.e., it falls in the lower right corner of Figure 1(b), while our setting aims to solve both the open-world and data imbalance problems, corresponding to the upper right corner of Figure 1(b). Moreover, since the prior distribution of the dataset is unknown in open-world scenarios, existing long-tail learning methods cannot simply be transferred to solve the data imbalance problem in LT-OPC, making it more challenging.

**Compared to OLTR:** though both study the long-tail problem in open-world scenarios, our method pushes one step further: we not only learn a model to detect open-set samples but also require the model to classify them according to their semantics. In addition, OLTR considers a fully supervised scenario that only utilizes manually annotated samples (lower left in Figure 1(c)), while we also make use of unlabeled (close-set and open-set) data (upper right and lower right in Figure 1(c)), which makes our approach more powerful and generalizable.

> The proposed approach is not novel. For example, the contrastive learning idea has been used in "Generalized Category Discovery" (reference [52]). Or the key insights need to be clearly stated.
Our **key insight** is that methods like GCD [1] lack effective supervision for novel categories, since they only apply a self-supervised contrastive loss to samples of novel classes, which leads to inferior performance; meanwhile, GCD overlooks the long-tail distribution of the real world. '***How to simultaneously tackle the issue of data imbalance and provide adequate supervision for open-set classes***' motivates our proposed method BaCon, whose **key technical novelty** is the dual-branch structure and the co-advice mechanism, which lets the two branches work collaboratively to provide interactive supervision for the imbalanced recognition task under open-world scenarios. Specifically, the contrastive-learning branch provides a distribution estimate to regularize the predictions of the linear classifier for better pseudo-labeling. In turn, the generated pseudo-labels are sampled and debiased to re-balance and provide additional supervision for the contrastive-learning branch. To fuse the knowledge of the two branches and learn a better feature space, we design a novel pseudo-label-based contrastive loss that clusters samples based on their *positiveness* scores. Thanks for the advice! We will highlight our key insight and key technical novelty in the final version.

> The experiments are done on CIFAR, ImageNet, and Places datasets. Are there any experiments on other fine-grained long-tailed datasets?

In addition to the experiments on CIFAR-10-LT, CIFAR-100-LT, ImageNet-100-LT, and Places-365-LT in the Experiments section of the main text, we also present results of our method on ImageNet-1k-LT and iNaturalist [3] in Appendix E.5; the iNaturalist dataset is highly imbalanced and consists of 8,142 categories. Furthermore, we compare our method with other baseline methods on the Herbarium-19 [4] dataset in Table F (please kindly refer to the rebuttal PDF file).
We believe that these experiments and comparisons serve to demonstrate the effectiveness of our method. [1] Sagar Vaze, et al. Generalized category discovery. CVPR, 2022. [2] Ziwei Liu, et al. Large-scale long-tailed recognition in an open world. CVPR, 2019. [3] G. Van Horn, et al. The inaturalist species classification and detection dataset. CVPR, 2018. [4] Kiat Chuan Tan, et al. The herbarium challenge 2019 dataset. In Workshop on Fine-Grained Visual Categorization, 2019. --- Rebuttal 2: Title: Comment by Authors Comment: We deeply appreciate your time and the thoughtful insights you've shared! Based on your comments, we have undertaken a comprehensive revision of our work (please refer to the rebuttal section for details). We hope we have addressed the concerns you raised regarding the originality of the task and approach, as well as the experiments across various datasets. As the discussion phase is nearing its conclusion, we would be grateful if you would inform us of any remaining concerns or questions you might have. We are more than willing to provide further assistance in clarifying any issues. Once again, we extend our heartfelt gratitude for your time and the invaluable suggestions you've provided!
Summary: This paper studies long-tailed recognition in the presence of open-set samples, which has not been studied by previous works. Moreover, existing long-tailed learning methods cannot be directly extended to open-set classification. To solve this problem, the authors design a new method termed BaCon, which utilizes contrastive learning to help estimate the label distribution and generate pseudo-labels for the unlabeled data, thus relieving the data imbalance and providing more supervision. Besides, a pseudo-label-based contrastive loss is proposed to cluster similar samples for better open-set classification. The authors conduct experiments on multiple long-tailed datasets, including CIFAR-10/100-LT, ImageNet-100-LT, and Places-LT. The results show that BaCon clearly improves performance, especially when the imbalance ratio is high.

Strengths:
1. This paper is the first to study long-tailed recognition with unlabeled data containing both open-set and close-set samples. The studied problem is more complex, and the performance of most previous methods is not satisfactory, indicating the necessity of a corresponding method.
2. The authors propose a new method for long-tailed open-set classification. The experimental results demonstrate its superiority over most existing methods.
3. The authors conduct several ablation studies to verify the effectiveness of each component.

Weaknesses:
1. The studied problem is novel; however, the motivation is insufficient. By simply integrating open-set classification methods with re-balancing strategies, the long-tailed problem may be alleviated. The authors should also compare with such simply integrated baselines, or explain the difficulty of the integration.
2. The pseudo-labeling module is important since it affects both the additional supervision and the pseudo-label-based contrastive loss. So how is the quality of the pseudo-labels ensured?
What if the pseudo-label is incorrectly assigned? To verify the quality of the pseudo-label, I suggest the authors calculate the accuracy of the pseudo-labels for both head and tail classes and report the results. 3. Some typos. The title of Section 5 should be "Experiments". In Table 4, GCD performs better in Many and Med shots, but the results are not bolded. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See "Weaknesses". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have discussed the limitations in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The studied problem is novel. However, the motivation is insufficient. By simply integrating open-set classification methods with re-balancing strategies, the long-tailed problem may be alleviated. The authors should also compare with the simply integrated baseline methods, or explain the difficulty of the integration. Thanks for the advice! We would like to claim that most supervised or semi-supervised long-tail learning methods [1, 2, 3] require the prior training set (long-tailed) distribution to tackle the data imbalance issue, which prevents them from being directly extended to resolve the proposed LT-OPC task, since the prior dataset distribution is unavailable in LT-OPC (line 56 - line 58). Meanwhile, though several self-supervised methods [4, 5] could alleviate the imbalance issue without knowing the dataset distribution, they discard all the accessible label information and bring only marginal accuracy gains. As shown in Table C (please kindly refer to the rebuttal PDF file), we combine the two best-performing baseline methods, GCD and SimGCD, with BCL [5], which is one of the latest self-supervised long-tail learning methods. It is observed that BCL cannot effectively tackle the data imbalance issue, while the proposed BaCon improves both accuracy and balancedness by a large margin via dynamic distribution estimation (Section 4.1) and the self-balanced knowledge transfer module (Section 4.2). > The pseudo-labeling module is important since it can affect the additional supervision and the pseudo-label-based contrastive loss. So how to ensure the quality of the pseudo-label? What if the pseudo-label is incorrectly assigned? To verify the quality of the pseudo-label, I suggest the authors calculate the accuracy of the pseudo-labels for both head and tail classes and report the results. Thanks for the advice! We report the test accuracy of the pseudo-labeling branch during training in Table D (please kindly refer to the rebuttal PDF file). 
Furthermore, we have two elaborately designed components in our method to ensure the quality of the pseudo-label-based contrastive loss. Firstly, to mitigate the impact of the long-tail distribution on pseudo-labels, we designed a debiasing module to post-hoc adjust the logits output from the pseudo-labeling branch (introduced in Section 4.2). Secondly, when calculating the *positiveness* coefficient $w_{ij}$ using the rectified pseudo-labels, we utilized the soft labels of each sample instead of hard labels (i.e., converting labels to one-hot labels based on the class with the highest logits). This design helps alleviate the influence of erroneous pseudo-labels. Empirical evidence supporting the effectiveness of these two components can be found in Table 6 (b) and (c). > Some typos. The title of Section 5 should be "Experiments". In Table 4, GCD performs better in Many and Med shots, but the results are not bolded. We would like to thank the reviewer for a very detailed review! We will fix the typos. [1] Menon, Aditya Krishna, et al. "Long-tail learning via logit adjustment." ICLR 2021. [2] Kim, et al. "Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning." NIPS 2020. [3] Lee, et al. "ABC: Auxiliary balanced classifier for class-imbalanced semi-supervised learning." NIPS 2021. [4] Jiang, Ziyu, et al. "Self-damaging contrastive learning." ICML, 2021. [5] Zhou, Zhihan, et al. "Contrastive Learning with Boosted Memorization." ICML, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. After reading your rebuttal and the other reviewers' comments, my concerns have been addressed to some extent. I would like to maintain my score. --- Reply to Comment 1.1.1: Title: Further Reply to Reviewer kmyS Comment: Thank you for your feedback! If you have any other questions or concerns, please feel free to reach out to me at your convenience. Once again, I sincerely appreciate your time and valuable suggestions!
Rebuttal 1: Rebuttal: # General Response ## To All Reviewers. Dear Reviewers: We would like to thank you for your time and insightful comments! We have carefully read your review comments and conducted additional experiments as required to answer the questions (please kindly refer to the rebuttal PDF file and the rebuttal reply to each reviewer below). We hope we have addressed your concerns. We would be grateful if you would kindly let us know of any other concerns and if we could further assist in clarifying any other issues. Thanks a lot again, and with sincerest best wishes, Authors ## Explanation of the Rebuttal PDF. In the rebuttal PDF file, we indicate the best performance among **all** methods with **bold numbers** in all tables (consistent with the submitted version), and we use underlined numbers to represent the best performance among the **baseline** methods. We believe this will help highlight the differences between our method and the baseline methods. ## The Novelty of the Proposed Setting. GCD [1] and OLTR [2] are closely related to the proposed LT-OPC setting, but we would like to clarify that the proposed problem setting still has significant differences from them and introduces unique challenges in this scenario. **Compared to GCD,** they only focus on the problem of the open world but ignore the data imbalance issue in the real world, i.e., their method falls in the lower right corner of Figure 1 (b); while our setting aims to solve both the problems of the open world and data imbalance, corresponding to the upper right corner of Figure 1 (b). Moreover, since we do not know the prior distribution of the dataset in open-world scenarios, we cannot simply transfer existing long-tail learning methods to solve the problem of data imbalance in LT-OPC scenarios, making it more challenging. **Compared to OLTR,** though we both study the long-tail problem in open-world scenarios, our method pushes 'one step further'. 
That is, it not only learns a model to detect the open-set samples, but also requires the model to classify them according to their semantics. On the other hand, OLTR considers a fully-supervised scenario that only utilizes manually annotated samples (lower left in Figure 1 (c)); while we also make use of unlabeled (close-set and open-set) data (upper right and lower right in Figure 1 (c)), which makes our approach more powerful and generalizable. The experimental results empirically validate the highly challenging nature of the proposed setting. GCD methods, as well as long-tailed SSL methods, all exhibit significant performance drops under the newly proposed setting, as shown in Table 2 in the main text, and Table A and Table B in the rebuttal PDF file. Additionally, another significance of our work is that existing research on long-tailed learning and open-set learning has developed independently, and very few previous works attempt to combine both of these areas. However, data imbalance and open-ended distribution are inherently intertwined with each other in the real visual world, which renders existing methods ineffective in terms of deployment. The proposed LT-OPC addresses both of these problems simultaneously, which can facilitate the deployment of existing methods in practical application scenarios. ## The Fairness of the Experiments. In the submitted version, we kept the original settings of the baseline methods. Namely, ABC, DARP, OpenCon, and ORCA use the ResNet backbone model as implemented in their papers. On the other hand, GCD, SimGCD, and our method utilize the DINO pre-trained ViT network, which may lead to an unfair comparison. To address this issue, **we have replaced the backbone networks of all baseline methods with the DINO pre-trained ViT, consistent with the proposed method**. Furthermore, we have ensured that all methods are trained for 200 epochs, with an equal amount of data used per epoch. 
Table A and Table B in the rebuttal PDF file present the modified versions of Table 3 and Table 4 in the main text, respectively, where $\dagger$ denotes adapted methods. It can be observed that the baseline methods benefit from the powerful feature extraction capability of the DINO pre-trained ViT, achieving higher accuracy than using ResNet as the backbone on all datasets. At the same time, our method still demonstrates significant performance improvement. In the final version, we will replace the original tables and report the experimental results of all methods when using the DINO pre-trained ViT as the backbone. [1] Sagar Vaze, et al. Generalized Category Discovery. CVPR, 2022. [2] Ziwei Liu, et al. Large-scale long-tailed recognition in an open world. CVPR, 2019. Pdf: /pdf/5413f97cd9fa6d76ab57264ebe3e3270f6d9357d.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces and provides a formal definition for a real-world task, long-tailed open-world classification (LT-OPC): it entails generating predictions for old and novel classes within a long-tailed open-world context. The proposed method incorporates a contrastive-learning branch and a pseudo-labeling branch that offer mutual guidance. Specifically, the contrastive-learning branch ensures reliable distribution estimates to standardize the pseudo-labeling branch's predictions. This in turn directs contrastive learning through self-balanced knowledge transfer via a novel soft contrastive loss. The paper shows its effectiveness compared to methodologies from the related fields of imbalanced semi-supervised learning and generalized category discovery. Strengths: - Clarity: This paper is well-structured and clearly presents the motivation and method. - Significance: The problem of long-tailed open-world classification is a significant real-world problem that this paper strives to define and address. - Originality: The paper introduces an intriguing concept, "soft contrastive learning," which makes it possible to benefit from pseudo-labeling and soft labels for contrastive learning. The pseudo-labeling branch provides the contrastive branch with classification signals, while the contrastive branch standardizes the distribution of the pseudo-labeling branch. - Quality: The supplementary material and experiments offer a detailed analysis, enhancing the paper's value. Weaknesses: - It is not a weakness, but since the OLTR work already exists, it is better to use a more distinct name for the problem the paper addresses. Especially since this problem is more in line with generalized category discovery, a different name for the problem seems less confusing. - Ablation studies could be better explored, and the explanation of the trends seen in Table 6 can be improved, explaining these trends with more depth. 
- Calling theorem 1 a theorem does not seem mathematically correct. It is solving a constrained optimization problem specific to the method. - Also, in the proof of theorem 1 in supplemental, is it correct to omit $C$ for a more rigorous consideration, because it depends on $w_{ij}$, which depends on $z_i$ and the $z_j$s? - Results for other methods are obtained by applying the code of those methods to the new datasets. Methods like ORCA on the cell dataset, and GCD and SimGCD on Herbarium, have decent performance in their own papers; since Herbarium and the cell dataset are long-tailed, comparing these methods on the same datasets and reporting the numbers from their papers seems a fairer comparison. - Although the number of experiments is enough, the numbers have not been explained and reasoned about. Since numbers for different methods have drastic changes, a section to analyze these numbers (at least in supplemental) seems necessary. - Minor: there are a few mistakes in the tables: (Table 4/known/Many/GCD: 75.9) (Table4/known/Med/SimGCD: 72.8) (Table5/CIFAR10-LT $\rho =150$/ OpenCon: 83.2) (Table5/CIFAR100-LT$\rho =150$/ BaCon-O: 66.8) should be bold. Also, indicating the second-best method or improvement can be beneficial for the reader to see the efficacy of the paper’s contributions. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1- In Table 4, the Stds for ABC, DARP, and other methods are strangely high. Is there any reason for this? Also, why does ORCA perform poorly in comparison to other methods? Since the results in their paper are reasonable, there might be something unfair to their method, especially in Table 4. Having 0.3 for "few" classes looks pretty odd compared to the highest number, 65.3. The same goes for SimGCD, which should have close results to GCD because of their high similarity. 
2- There is a large gap between the paper's results and the next-best ones on CIFAR100-LT, especially for new classes; also, the model has a higher accuracy for new categories than the old ones, which seems quite strange. Is there any particular intuitive reason for this? 3- The reason behind the paper outperforming other methods with this high margin has yet to be explored and explained well. It is necessary to explain why methods behave the way the paper has reported for each dataset. Also, numbers for each dataset have drastic variances between different methods. It is worthwhile to investigate and explain the reasons for these differences so that they become more comprehensible. 4- While it is interesting to consider common unseen categories, as in Figure 1, the real-world data usually has the long-tailed categories as unseen. With this new way of considering the open world, a model that assigns most novel samples to the most common novel category or, in other words, detects the open-world samples and then assigns them to the most common category, will triumph over a model which considers the unknown categories as all lying in the long tail. For instance, ORCA has this uniform objective loss, which is in direct contrast with the way data has been assigned (maybe this is the reason for ORCA's poor performance in the tables), while for the real world, the more traditional long-tailed split is in fact more plausible. Although data preparation is explained in the paper, is it similar to how Figure 1 has depicted it? Are there any experiments to also compare methods against traditional long-tailed versions of datasets? Also, a base method that only detects open-world samples and assigns them to the most common novel categories might help to decipher how much of this assignment is just because of the model's knowledge of common novel categories. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations of the work have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > A different name for the problem seems less confusing. Thanks for the advice! We will change the name to 'Imbalanced Open-World Classification'. > Ablation studies could be better explored. Thanks for the advice! We will add a more detailed analysis in the ablation. > Calling theorem 1 a theorem is not correct. Thanks for the advice! We will change 'Theorem 1' to 'Claim 1'. > Is the proof of theorem 1 correct in omitting C? Thank you for your reminder! We will modify the $C$ in Eq. 2 (Appendix) to $C_i$, as $C$ may vary among different samples. But we believe that this should not affect subsequent proofs, because, for a training batch, its *positiveness* score matrix is obtained by calculating the similarity of the rectified class probability distribution between each sample pair $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ (line 190 - line 192), so each element is a fixed value that does not affect the subsequent differentiation process and can be regarded as a constant. > Results on Herbarium or Single-Cell. Thanks for the advice! We compare the proposed BaCon with other methods on Herbarium-19 in Table F in the rebuttal PDF. Our method outperforms baselines on open-set ('New' split) classes and overall accuracy ('All' split) by a large margin. The authors of ORCA didn't provide the source code for training methods on the Single-Cell dataset, which is a dataset in the field of biology and may need some unique preprocessing or different hyper-parameters. We have emailed the authors to ask for the source code, and we will conduct the experiments on Single-Cell when we get the code from the authors. > The results have not been fully explained. Thanks for the advice! We will add the explanation below: In Table A (in the rebuttal PDF), we compared the proposed BaCon with two SOTA imbalance-SSL methods, ABC and DARP, as well as the six latest open-world recognition methods. 
Among them, GCD and OpenCon are contrastive-based methods, while ORCA, SimGCD, OpenLDN, and TRSSL are classifier-based methods. **For the imbalanced-SSL methods,** thanks to their carefully designed semi-supervised learning approaches for long-tail data, they achieve decent performance on known (close-set) classes. However, they perform poorly on novel (open-set) classes because the imbalance-SSL scenario does not consider the presence of open-set samples in the unlabeled data. **Regarding the open-world recognition methods,** we found that contrastive-based methods outperform classifier-based methods in the LT-OPC scenario. Reasonable explanations can be found in [1], which indicates that self-supervised contrastive training is more robust to class imbalance than supervised methods. Nevertheless, existing contrastive-based methods struggle to optimize the feature space of unlabeled samples, e.g., GCD only uses a self-supervised contrastive loss for unlabeled data. Moreover, GCD methods lack tailored designs for imbalanced datasets, leading to significant performance degradation when the training set has a long-tailed distribution (as shown in Table 2 in the main text). **For the proposed BaCon,** we design a dual-branch structure that works collaboratively to provide interactive supervision and achieve self-rebalancing. The pseudo-labeling branch is enhanced by the proposed 'dynamic distribution estimation' algorithm for regularizing the predictions, while the contrastive-learning branch takes advantage of the 'self-balanced knowledge transfer' module, which helps to learn a balanced and reasonable feature space. It outperforms baseline methods by tackling the imbalanced recognition task and the open-world challenge at the same time. > Minors in tables. We would like to thank the reviewer for a very detailed review. We will fix the mistakes in the experiment tables. > There might be something unfair in experiments. 
Please kindly refer to 'The Fairness of the Experiments' in the 'General Response' column at the top. > In Table 4, the Stds for some methods are strangely high. As you mentioned, it is observed that classifier-based methods have a relatively large Std compared to contrastive-based methods (GCD and OpenCon). This phenomenon is aligned with [1], which suggests that self-supervised contrastive training is more robust to class imbalance than supervised methods. We will add the explanations in the final version. > Although data preparation is explained in the paper, is it similar to how Figure 1 has depicted? Are there any experiments to compare methods against traditional long-tailed versions of datasets? In our paper, we assume that both close-set and open-set categories follow a long-tailed distribution, meaning that both parts of the data contain both common and rare classes. For example, in Figure 1 (a), we assume classes 1-5 are close-set categories and 6-10 are open-set categories, with roughly the same distribution for the two parts. The core idea of setting the problem in this way is that, when manually collecting data, we can consider it as randomly sampling some categories from a large (including plenty of classes) real-world long-tailed distribution as the close-set. Similarly, the open-set categories can also be considered as a sampling process from the same large long-tailed distribution. For instance, consider training a species classification model where the close-set data we collect covers species in the forest, while open-set classes can be species on plains or marine organisms. In this case, both close-set and open-set samples are long-tailed. In the experiments, we compared BaCon with the semi-supervised long-tail learning methods ABC and DARP. On the other hand, since we cannot obtain a prior distribution of the training set in LT-OPC, we are unable to compare the proposed method with supervised long-tail methods. [1] Hong Liu, et al. 
Self-supervised learning is more robust to dataset imbalance. ICLR, 2021. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing additional experiments and explanations. Most of my concerns are addressed, and here I mention a few remaining suggestions (or discussions). **Problem Name** 'Imbalanced Open-World Classification' is still ambiguous, and it can be mistaken for detecting open-world samples drawn from a long-tailed distribution. Since the method overlaps with generalized category discovery, it seems more in line with the literature to make it clear that this is generalized category discovery for a scenario in which both novel and seen categories obey a long-tailed distribution. By looking at the name alone, the problem should be distinguishable from similar scenarios. Also, I think since the paper's version of long-tailed is different from the traditional long-tailed, introducing a term that also distinguishes this different category distribution can aid future works in delineating which problem they are addressing, traditional long-tailed or this double long-tailed for novel and seen categories. **Ablations** It will be more convincing if it is mentioned how the ablations will be explored, because it is unclear whether it will be enough. (For instance, in comments) **Fairness of the method** What I meant by the fairness of the method is that in the paper, it is assumed that both novel and seen scenarios obey a Pareto distribution. In contrast, other methods do not have this assumption. It gives an unfair advantage to the method. Although I believe the current long-tailed works that consider the novel categories to be in the tail are also unrealistic, comparing these methods under the same long-tailed definition they used can show how much of the method's ability is due to the Pareto distribution prior knowledge. 
For instance, Herbarium results for GCD are much lower than their paper, so it will be fair if the result of this method is also compared on the same distribution they've used so that the robustness of the model is tested. **Theorem 1** I still think $C_i$ also can not be discarded as easily as mentioned. However, since its name has changed from theorem 1 to claim 1, it can be less rigorous, so I consider it resolved. **Results Analysis** Although the problem that the paper has addressed is novel since its novelty is limited, the experimental part and effect of the method itself can benefit from more explanation. I appreciate that the authors addressed the observed trends in results generally, and due to limited space, having an in-depth analysis in the rebuttal is not feasible. But I strongly recommend comprehensively analyzing significant gaps or stds in each table in the revised paper since the paper has a rich set of experiments; it is worthwhile to provide some insights for the reader to understand the purpose of each experiment, how each specific dataset affects the results and so on. It is also beneficial to mention which branch of the method addresses the weakness of each specific set of previous works (open-world recognition and Imbalanced SSL). But in general, I appreciate the provided explanations in the rebuttal, and it provides some insights about why the other methods fail. It is yet to be mentioned *how* the proposed method resolves previous works' shortcomings. It can be speculated which part of the framework addresses each prior work's weakness. However, it will be reassuring if the authors discuss this in the paper (or supplemental). The following paper also might provide some insights about the proposed problem, so it will be helpful to consider mentioning the differences. Can your method be applicable when novel category distribution is arbitrary? 
I think the branch of your method that makes it robust to category frequencies might also make it distribution-agnostic, similar to [a] scenario (but for generalized category discovery). [a] Yang, Muli, et al. "Bootstrap Your Own Prior: Towards Distribution-Agnostic Novel Class Discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. --- Reply to Comment 1.1.1: Title: Further Reply to Reviewer JDMB (1/3) Comment: > Problem Name Thanks for the advice! Inspired by [a], we will change the name of the setting to 'Distribution-Agnostic Generalized Category Discovery' to better fit our problem setting and reduce ambiguity. > It will be more convincing if it is mentioned how the ablations will be explored because it is unclear whether it will be enough. **The effectiveness of ${\mathcal{L}}_{reg}$** Recall in Section 4.1, we propose to regularize the predictions of the pseudo-labeling branch by the estimated train-set distribution. In Table 6a, we show the performance of the pseudo-labeling branch on ImageNet-100-LT with different estimation strategies. 'Oracle' denotes we use the true distribution $\boldsymbol{\pi}$ of $\mathcal{D}$ (unknown in practice) as the target distribution in Eq. 2, and it serves as an upper bound of the performance. Compared to previous works (ORCA and SimGCD) that use a balance prior, regularizing the predictions with oracle long-tailed distribution significantly improves the performance on both known and novel categories, showing the importance of the distribution estimation process. Meanwhile, the similar results achieved by our estimation strategy imply $\boldsymbol{\pi_e}$ could be a reliable proxy to $\boldsymbol{\pi}$. 
Furthermore, we investigate whether two alternative estimation strategies could help the pseudo-labeling branch: 1) only regularizing the known-class predictions with $\boldsymbol{\pi}_{\mathcal{D}^l}$, and 2) performing k-means clustering on the features of the pseudo-labeling branch; they both result in inferior accuracy and could in turn deteriorate the contrastive learning process. In conclusion, the results indicate that an accurate estimation of the training set distribution is crucial for classifier-based methods (e.g., ORCA and SimGCD) to have decent performance when generalizing to LT-OPC. **The effectiveness of sampling & debiasing** In Section 4.2, we suggest adjusting the prediction logits according to the estimated distribution for debiasing and sampling unlabeled instances for re-balancing. We adopt step-by-step ablation experiments for the proposed approaches in Table 6b. 'Baseline' refers to not using the proposed pseudo-label-based contrastive loss, and it leads to inferior performance on novel classes since they only get supervision from the self-supervised contrastive loss. 'Vanilla' means incorporating the designed pseudo-label-based loss into the optimization objective via Eq. 7 without the debiasing and sampling modules, i.e., calculating the designed loss directly without performing any preprocessing on pseudo-labels. It brings a significant performance gain on novel classes (~10%) due to leveraging the additional supervision in pseudo-labels. On the other hand, known classes also benefit from the 'knowledge distillation' process, which implies the probability distribution information from the pseudo-labeling branch is complementary to the one-hot label. From the results, we observe that both the 'debiasing' and 'sampling' modules could further bring performance gains, and by combining the two techniques (as described in Section 4.2), we achieve higher test accuracy on both known and novel classes. 
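(Note by editor: the rebuttal describes the debiasing step only in words. A minimal sketch in the spirit of post-hoc logit adjustment is given below; the function names, the exact adjustment form `logits - tau * log(prior)`, and the temperature `tau` are assumptions for illustration, not the paper's definition.)

```python
import numpy as np

def debias_logits(logits, est_prior, tau=1.0):
    """Post-hoc adjust classifier logits by an estimated class prior.

    Assumed form: subtract tau * log(prior), so head classes are penalized
    and tail classes boosted before pseudo-labels are computed.
    """
    return logits - tau * np.log(np.asarray(est_prior) + 1e-12)

def soft_pseudo_labels(logits, est_prior, tau=1.0):
    """Softmax over debiased logits -> soft (not one-hot) pseudo-labels."""
    z = debias_logits(logits, est_prior, tau)
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# A near-tie prediction that slightly favors the head class (class 0)
# gets pulled toward the tail class (class 1) after debiasing.
logits = np.array([[2.0, 1.8]])
prior = [0.9, 0.1]  # estimated long-tailed training distribution
p = soft_pseudo_labels(logits, prior)
```

Under this assumed form, the adjustment only shifts each class's logit by a constant, so it can be applied after training without touching the backbone, which matches the "post-hoc" phrasing in the rebuttal.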
**Definition of the pseudo-label-based contrastive loss** In Section 4.3, we design a novel *soft* contrastive loss based on pseudo-labels to transfer the knowledge of $f_{cls}$ into $f_{con}$. As an alternative, we could also construct the loss in a *hard* manner, where we formulate the positive pairs on top of the predicted class with the largest logit and further perform the supervised contrastive loss. Intuitively, the *hard* design discards the probability distribution information and is more susceptible to potential false pseudo-labels, while the *soft* contrastive loss utilized in our method helps alleviate the influence of erroneous pseudo-labels. The empirical results also support this intuition: as shown in Table 6c, the proposed $\mathcal{L}_{CL}^{s}$ (soft) outperforms the supervised CL loss (hard) by a large margin. This phenomenon is also observed in knowledge distillation [1], where transferring knowledge using soft labels rather than one-hot predictions achieves better performance. [1] Hinton, et al. "Distilling the knowledge in a neural network." arXiv (2015).
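(Note by editor: the *soft* contrastive loss is only described in words in this excerpt. The sketch below assumes the *positiveness* coefficient $w_{ij}$ is the inner product of two samples' soft pseudo-label vectors and that it weights an InfoNCE-style log-probability; the paper's exact formulation may differ.)

```python
import numpy as np

def soft_contrastive_loss(feats, soft_labels, temp=0.1):
    """Hypothetical soft pseudo-label contrastive loss.

    Replaces the hard 0/1 positive mask of supervised contrastive
    learning with continuous weights w_ij = <p_i, p_j> computed from
    soft pseudo-label vectors p_i, p_j.
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # L2-normalize
    sim = f @ f.T / temp
    n = len(f)
    mask = ~np.eye(n, dtype=bool)  # exclude self-pairs
    # row-wise log-softmax over non-self similarities
    row = sim - sim.max(axis=1, keepdims=True)
    exp = np.exp(row) * mask
    log_prob = row - np.log(exp.sum(axis=1, keepdims=True))
    w = (soft_labels @ soft_labels.T) * mask  # positiveness scores
    w = w / w.sum(axis=1, keepdims=True)      # normalize per anchor
    return float(-(w * log_prob)[mask].sum() / n)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))          # batch of embeddings
p = rng.dirichlet(np.ones(5), size=8)     # soft pseudo-labels over 5 classes
loss = soft_contrastive_loss(feats, p)
```

Because $w_{ij}$ is continuous, an incorrectly argmax-ed pseudo-label contributes only a down-weighted positive pair rather than a fully wrong one, which is the robustness argument the rebuttal makes for the soft design.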
Empowering Convolutional Neural Nets with MetaSin Activation
Accept (poster)
Summary: The paper proposes MetaSin, a new activation function for deep learning. MetaSin essentially consists of a ReLU function plus a sum of parameterized sine activation functions. The MetaSin function is developed specifically to work in the domain of image prediction. The paper presents multiple experiments that support MetaSin as the new state-of-the-art activation function, and also investigates different training setups and hyperparameters. Strengths: Originality: The work seems original and novel. Quality: The paper is of high quality and provides both a more theoretical reasoning for why MetaSin should be able to perform better than ReLU, and backs this up with multiple experiments. Clarity: The paper is clearly written and easy to follow. Significance: As the authors mention themselves in their "Broader Impact" section, a potential new SOTA activation function can have a large impact on the community, with the potential to increase performance across a large selection of tasks and models. Weaknesses: While I agree with the underlying hypothesis that ReLU networks are most commonly used and therefore ReLU is the most relevant baseline to compare to, I still think the other baselines are relevant, especially for section 5.3 on image classification. The statement in L52-53 is simply wrong and misrepresents the conclusion in [1]. It is correct that ReLU is sometimes better, but in [1] it is only true for machine translation. For image classification Swish seems to be the better choice, and I am really missing this as a baseline in at least section 5.3. In relation to that, in L55-56 the authors mention that most other activation functions are untested on image prediction. In that case it seems obvious that the authors could have tested more baselines than ReLU. In general, standard deviations are missing for most experiments, making it impossible to tell whether the results are significant at all. I would therefore be very careful using the term "state-of-the-art". 
Regarding section 3.2: more details on the CUDA implementation, specifically regarding the kernel fusion, and a statement of the actual gradient of the MetaSin operator are missing. [1] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions, 2017. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In addition to some of the weaknesses pointed out in the previous section, I would like the authors to answer: 1. Table 1: what is the computational scaling behaviour of K? It seems important, as it could indicate whether there is a performance-compute trade-off for this hyperparameter. 2. Table 5: It seems like increasing K is beneficial for performance. I wonder how far this behaviour can be pushed. What about increasing it further, to 20 or even higher? Smaller correction: L88: a closing bracket is missing in the equation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the authors are fairly on point in their limitations section regarding MetaSin. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
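Based on the formulation described in the summary above (a ReLU plus a sum of K sine terms with trainable amplitudes, frequencies, and phases), a minimal NumPy sketch of such an activation and its analytic input gradient might look as follows. All names and the exact parameterization are assumptions for illustration, not the authors' CUDA implementation:

```python
import numpy as np

def metasin(x, a, f, p):
    """Hypothetical MetaSin-style activation:
    ReLU(x) + sum_j a_j * sin(f_j * x + p_j).

    a, f, p: arrays of shape (K,) holding the amplitude, frequency, and
    phase of each sine component. Illustrative sketch only.
    """
    relu = np.maximum(x, 0.0)
    # Broadcast x against the K sine components and sum over components.
    sines = (a[:, None] * np.sin(f[:, None] * x[None, :] + p[:, None])).sum(axis=0)
    return relu + sines

def metasin_grad_x(x, a, f, p):
    """Analytic gradient w.r.t. the input:
    step(x) + sum_j a_j * f_j * cos(f_j * x + p_j)."""
    step = (x > 0).astype(x.dtype)
    coss = (a[:, None] * f[:, None] * np.cos(f[:, None] * x[None, :] + p[:, None])).sum(axis=0)
    return step + coss
```

Note that with all amplitudes set to zero, this sketch reduces exactly to ReLU, which is consistent with the reviewers' descriptions of the formulation containing ReLU as a special case.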
Rebuttal 1: Rebuttal: **Swish as a Baseline for Classification** As the reviewer suggested, we ran the image classification experiments reported in Table 6 using Swish versions of the baselines. We report the validation accuracies, in comparison to the MetaSin and ReLU results from earlier:

| Teacher | WRN-40-2 | WRN-40-4 | WRN-28-2 | WRN-40-2 |
| :--------: | :-------: | :-------: | :-------: | :-------: |
| **Student** | **WRN-40-1** | **WRN-40-1** | **WRN-16-2** | **WRN-16-2** |
| ReLU | 73.39 | 70.53 | 71.76 | 73.65 |
| MetaSin | **73.74** | **72.55** | 72.33 | **74.10** |
| Swish | 73.28 | **72.54** | **72.98** | 72.60 |

We’d be happy to update Table 6 with the additional Swish baseline and modify L255 as “... MetaSin student networks consistently outperform their ReLU counterparts, *and in some cases also the Swish student networks*, …”. We believe that the latter part “... motivating further investigation of using MetaSin activations in classification tasks” is still appropriate and would keep it unchanged. We also greatly appreciate the reviewer drawing our attention to the statement in Lines 52-53. In fact, Swish outperforms ReLU in the image classification experiments presented in Ramachandran et al. 2017, while performing comparably to ReLU in the others. We will replace the text in those lines accordingly with the following: “... A comparative study of these activations re-affirmed ReLU as a strong baseline, while suggesting Swish might be a better alternative for image classification.” **Significance of Experiments** To investigate this while staying within our compute budget, we ran an experiment on a slight variation of the resampling model from Section 4.1, which uses the same loss, batch and patch sizes for training, and the exact same feature extractor as the original model. 
The only difference is that the model is trained for 2x upsampling (instead of generic resampling) and the MLP part following the feature extractor is changed to accommodate that. As a reminder, we only use MetaSin activations to replace ReLUs in the feature extractor, both in the resampling model we presented in Section 4.1 and in the newly trained upsampling model. The mean and standard deviation computed from 5 independent training runs are:

| Upsampling Model | PSNR [dB] |
| :--------: | :-------: |
| ReLU | 31.82 $\pm$ 0.02 |
| MetaSin | 32.00 $\pm$ 0.01 |

On the denoising side, unfortunately, due to the resource-intensive training process of the denoiser model from Section 4.2, repeating experiments was not an option. That being said, in Tables 3, 4, and 5 we present a total of 9 different runs that differ in their initializations, number of *sin* components, and use of KD-B. In all these experiments MetaSin models consistently outperform the DPCN baseline presented, often by a large margin. Additionally, in Appendix I we report similar levels of improvement from a kernel prediction version of the same denoiser augmented with MetaSin. We firmly believe that these consistently strong improvements are very unlikely to be due to randomness in the training procedure. **Details on CUDA kernel** As another reviewer also asked for more details, we address this in the Author Rebuttal section. **Computational scaling behavior of K** We ran a benchmark experiment in which we compare the forward and backward latencies of the ReLU, MetaSin Native, and MetaSin CUDA functions on the same input tensors using different K values. The results can be found in the pdf file attached to the Author Rebuttal section. As the Reviewer suggested, the results show that the computational cost gets higher as we increase K. 
**Effect of setting K to a higher number** The reviewer is correct that further improvements in terms of model accuracy can certainly be made by pushing K beyond what we present in the paper. We did not go beyond 10-12, as throughout our experiments we observed that doing so often resulted in diminishing returns. As a more concrete example, we provide results from a resampling experiment below:

| Activation / Upsample Factor | x2 | x3 | x4 |
| :--------: | :-------: | :-------: | :-------: |
| MetaSin-10 | 33.09 | 29.42 | 27.17 |
| MetaSin-20 | 33.14 | 29.45 | 27.20 |

(Note that this is from an older, but in our opinion still sufficiently representative, run with slightly inferior hyperparameters, and the training was stopped prematurely after 900K iterations instead of the full 2M, hence the lower PSNRs compared to Table 2.) These results suggest that choosing K > 10 would indeed make sense in cases where the absolute best quality is desired and the additional latencies are tolerable, but overall K $\approx$ 10 seems to be the happy medium. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank you very much for your detailed response to my concerns. I greatly appreciate the additional clarifications and the additional experimental results provided (even if some of them are a bit out of date), which answer all of the questions I have regarding the paper. Based on concerns raised by other reviewers and the strong answers you have given them, I will raise my score by one. I would still strongly advise that some of the details regarding the custom CUDA implementation (and thoughts about it) be included in an appendix for the final version of the paper. --- Reply to Comment 1.1.1: Comment: We are happy that we were able to address all of the Reviewer's questions in our rebuttal. As suggested, we will add a discussion on the CUDA implementation in the final version of the paper.
Summary: The paper proposes a modification of the sine activation function proposed in SIREN and shows that this results in improved performance compared to ReLU and others. The new activation, called MetaSin, is a superposition of ReLU with several sinusoidal functions, and is motivated by the observation that ReLU has a spectral bias towards low frequencies. To ensure that training is stable in deep networks, the authors propose a distillation approach, where the activation function parameters, such as amplitudes, are initialized based on a teacher ReLU model. The authors show that by combining these two techniques, they outperform ReLU and achieve state-of-the-art results on denoising and image resampling tasks. Strengths: - The paper introduces a new activation function that can be of interest to the community, and which contains ReLU as a special case. - The authors provide an optimized implementation of the activation function that is 3 times faster than the native PyTorch implementation without increasing the memory overhead. In addition, the authors report that the new activation increases overall training compute by only 3% compared to ReLU. However, this does not account for the impact of distillation, which seems to be needed. - The empirical results are strong, achieving SoTA on a few tasks, such as image resampling and denoising. - The paper is well-written and easy to follow. Weaknesses: - It seems that all experiments are carried out with distillation. The authors state that they also apply distillation to ReLU networks, but how does that work when ReLU does not contain any shape parameters? If the authors introduce an amplitude to ReLU, its effect may vanish when also using normalization layers, which may explain why the authors do not observe any impact of distillation in ReLU networks. This needs to be clarified. - The experimental setup seems to be different in each experiment. 
For example, the authors use KD-bootstrapping during the first 5% of training in the denoising experiment but switch to 10% in image resampling. They also use a different initialization of the frequencies. Are these chosen based on a separate validation set? It is not clear if this is the case. - The authors claim in the abstract that the improvement is obtained by "simply replacing the activations". However, the new activations contain trainable parameters that are also initialized/trained using distillation. It is not a simple matter of replacing the activations. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - In Line 160, the authors say that the KD Bootstrapping approach comprises only 10% of the total training budget. However, you are also training another model with ReLU activations. Wouldn't this increase the training compute by about a factor of 2? Or is the 10% here only for training MetaSin shape parameters based on the teacher model? - How do you select the KD-bootstrapping duration in each experiment? Is it based on a separate validation split? If so, please mention this explicitly in the paper. - References are missing parentheses throughout the paper. Consider replacing \cite with \citep. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Distillation and ReLU Networks** The reviewer is correct that our best results in both image resampling and denoising applications are obtained using distillation, specifically KD Bootstrapping as discussed in Section 3.1 of our manuscript. In short: the role of distillation in these experiments is to stabilize training in the early phases and guide the network towards good local minima. Later in training we turn it off and allow the network to learn the final solution independently, without any additional constraints. This is a specific use case for distillation tailored to training MetaSin activations. In general, though, Knowledge Distillation is used to enhance the accuracy of a student network (usually with fewer parameters) by utilizing a pre-trained teacher network (usually with more parameters). The image classification experiment in Section 5.3 is carried out in this general knowledge distillation setting, where the teacher-student pairs and corresponding accuracies are listed in Table 6. Thus, to make the comparison between MetaSin and ReLU student networks fair, we apply distillation to both student variants. This experiment shows that MetaSin student networks are capable of absorbing more information than their ReLU counterparts from identical teacher models. We’d be happy to modify the text accordingly to clarify the use of knowledge distillation in the aforementioned experiments. **Frequency initialization** Through a grid search across the many experiments we performed for this work (on fixed validation sets), we empirically found that initializing the frequency shape parameters of MetaSin activations as $f_j = j$, where $j \in [1, K]$ ($K$ being the number of *sin* components), works well in most of our experiments involving convolutional models, which is what we recommend in Section 3.1 as a sensible default. 
That being said, as with any hyperparameter, slight improvements can be made by tweaking the default value: for instance, the last two columns of Table 3 show that $f_j = j/2$ yields better results when training the DPCN denoiser described in Section 4.2. **Duration of KD Bootstrapping** In order to produce our denoising and resampling results, we performed KD Bootstrapping for 200K iterations, which corresponds to 5% and 10% of the total training time, respectively. Similarly to the frequency initialization, we determined the 5-10% of total training iterations that we recommend as the default in Section 3.1 through our observations across the various experiments we performed during this work. This duration has been sufficient to stabilize training and guide the network towards good local minima in the beginning, and we observed that beyond this point additional KD Bootstrapping steps do not improve the results. **Total training time with KD Bootstrapping** The percentage in Line 160 refers to the ratio of the total training iterations in which we utilize distillation, i.e. incur an additional distillation loss during training. As the computational overhead of the distillation loss is minimal, the total training time with or without KD Bootstrapping is roughly the same. That being said, in both our denoising and resampling experiments we had access to the trained baseline ReLU networks. If that is not the case, a ReLU network needs to be trained beforehand, and this should be included in the training budget. We would be happy to modify the text to clarify this point. **”Simply replacing the activations”** We address this in more detail in the Author Rebuttal section, as another reviewer raised a similar concern. In short: although the procedure for replacing the activations (including re-training) is described later in the text in Lines 140-143, we fully agree that our phrasing in Line 12 might be misleading and will modify the text accordingly. 
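The KD Bootstrapping schedule described in this rebuttal (a distillation loss against a pre-trained ReLU teacher that is active only during an initial fraction of training, after which the student trains on the task loss alone) can be sketched as follows. The function name and the simple additive weighting are illustrative assumptions, not the authors' exact implementation:

```python
def total_loss(task_loss, distill_loss, step, total_steps,
               kd_frac=0.05, kd_weight=1.0):
    """Illustrative KD Bootstrapping schedule: add a distillation term only
    during the first `kd_frac` of training iterations, then train on the
    task loss alone. The weighting scheme here is an assumption."""
    if step < kd_frac * total_steps:
        return task_loss + kd_weight * distill_loss
    return task_loss
```

With `total_steps=2_000_000` and `kd_frac=0.10`, the distillation term is active for the first 200K iterations, matching the resampling figures quoted above.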
--- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank you for answering the questions. I do believe that the paper should clarify the role of distillation, and avoid phrasings like the one in Line 12 and the statement about the impact on compute, both of which may give a false impression. I'm happy that you agree to revise those. Regarding the duration of KD Bootstrapping, I'm satisfied with the answer, but it's perhaps worth clarifying in the paper as well. --- Reply to Comment 1.1.1: Comment: We are happy that our rebuttal answered the Reviewer’s questions and that the revisions we proposed for the text were found suitable. In addition to the revisions already discussed, we will extend the text in Lines 159-160 with the points we make in our answer to “Duration of KD Bootstrapping”, as the Reviewer suggested.
Summary: **SUMMARY AFTER REBUTTAL**: The authors have addressed most of my concerns and I have increased my score during the rebuttal phase. I believe the novelty of the paper is small when moving beyond its specialized subfield, which is why the overall score remains low. --- The paper proposes a variant of the sin activation function proposed in SIREN, specifically for image reconstruction and image denoising. Instead of using a single sin function, they consider a linear combination of a ReLU function and multiple sin functions, where the weights of the linear combination and the parameters of the sin functions are shared across layers and trained with back-propagation. The networks are trained via a simple knowledge distillation procedure after a drop-in replacement of the activation functions of the corresponding ReLU networks. A series of experiments shows better image reconstruction capabilities with a very small overhead (thanks to a custom CUDA kernel) on two major benchmarks. Strengths: - The paper is very well written and easy to follow. - Experiments are good (see some remarks below), and the improvements are consistent. - The motivation for the proposed AFs is only discursive, but clear. - As far as I know, this formulation is novel, with some caveats (see below). Weaknesses: - My main concern is that the idea of building an AF from a linear combination of base AFs whose weights are trained is well known in the literature. Considering, for example, the survey on AFs by Apicella et al., 2021 [A survey on modern trainable activation functions], they devote an entire chapter to this idea, including Adaptive AFs, Variable AFs, Kernel-based AFs, the Adaptive Blending Unit, the Adaptive Piecewise Linear Unit, etc. 
- The core contribution of this paper is to apply this idea to sin AFs, i.e., considering a different base set from the papers cited above (which combined other types of functions, such as kernels with different weights, ReLUs and sigmoids, etc.), and showing that this is useful in the context of image reconstruction. While interesting, this is a niche result that is best suited for a smaller conference or a journal with an applicative focus. - In addition, the authors only compare to standard AFs, although they consider an "MReLU" AF which seems similar to the ensemble AFs described above. It would be helpful to have a stronger comparison to ensemble AFs. Note that a similar paper, focused on PINNs, was rejected from last ICLR (https://openreview.net/forum?id=MpGP-z07TmM) for similar reasons. An additional weakness is that the authors claim their method is a complete drop-in replacement, while it requires a KD step to work correctly. I believe these claims should be amended. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Apart from the previous weaknesses, I would be curious to see more details on their CUDA kernel. Is this done manually, or is it just the result of PyTorch compile procedures? I am also skeptical of their initialization range, since they set all the weights multiplying the sin AFs to 0. Wouldn't this prevent good gradient flow? More plots of the resulting sin functions before and after training could improve the discussion here. While the paper is well written, some citations appear incorrectly (e.g., "state-of-the-art resampler Bernasconi et al. [4]"). There is also a small typo on P3 ("it’s predictions"). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: All limitations are correctly described in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comparison with ensemble activations** Throughout the paper we present comparisons against the popular baselines Snake, Mish, and Siren, as well as an ensemble activation we call MReLU that is similar to the Adaptive Piecewise Linear Units presented by Agostinelli et al., 2015 and mentioned in the survey by Apicella et al., 2021 (we will add citations accordingly). Tables 2 and 4 in our main paper show that MReLU consistently performs worse than MetaSin in both resampling and denoising applications. Following up on the reviewer’s comments, we ran additional experiments using the other ensemble activations Adaptive Blending Units (abu), Variable AFs (vaf), and Adaptive AFs (aaf) in the same experimental setting that we used to generate the first row of Figure 1 in the main paper. We present the best results we obtained from multiple runs with different initializations below, also including ReLU and MetaSin as reference:

| Activation | abu | vaf | aaf | ReLU | MetaSin |
| :--------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| PSNR [dB] | 29.79 | 28.98 | 29.06 | 29.81 | 35.04 |

We were also able to run two experiments in the image resampling setting using abu and aaf versions of the model from Section 4.1, which we present below:

| Activation | x2 | x3 | x4 |
| :--------: | :-------: | :-------: | :-------: |
| abu | 33.02 | 29.31 | 27.09 |
| aaf | 32.97 | 29.28 | 27.10 |
| ReLU | 33.03 | 29.36 | 27.09 |
| MetaSin KD-B | 33.26 | 29.58 | 27.29 |

Overall the ensemble activations we tested tend to perform roughly on par with ReLU. We would be happy to include the above data points in the final manuscript. **Rejected ICLR submission on PINNs** The main criticism that led to this decision (despite otherwise favorable reviews) appears to be the argument strongly put forth by Reviewer `E27E` stating that: “The physics-informed activation function (PIAC, Eq. (10)) …”, which the submission claims as a novel contribution, “... 
is **identical** to, among others, the soft-normalized version of the adaptive blending unit (ABU) [1]” (emphasis by `E27E`). As such, we don’t believe the analogy to our work applies, since to the best of our knowledge the MetaSin formulation is novel, and all of our reviewers seem to agree with us on this point. [1] Sütfeld et al., Adaptive Blending Units: Trainable Activation Functions for Deep Neural Networks, 2017 **Results are Niche** Our main contributions in this paper include the specific formulation of the MetaSin activations, which we arrive at by gradually addressing issues associated with *sin* activations as described in Section 3 of our paper, as well as the training methodology that we call KD Bootstrapping, described in Section 3.1, which to our knowledge has not been explored in the context of either *sin*-based or ensemble activations. While we focused our efforts on thoroughly investigating the use of MetaSin activations in two arguably core topics within our target domain of image prediction applications, namely resampling and denoising, we believe it is not unreasonable to assume that other applications within this vast domain might benefit from the techniques we present in this paper. In support of this argument, we present various preliminary results throughout the paper, which show the effectiveness of MetaSin activations in NeRF models (Table 8 in the Appendix) for predicting novel views, as well as in 2D (Table 7 in the Appendix) and 3D (Figure 7 in the Appendix) signal representation tasks. The domain of image prediction applications spans a large share of classical research problems in image/video processing and computer graphics/vision: segmentation, matting, deblurring, tone mapping, depth estimation, view interpolation, and inpainting, to name a few. Moreover, one can speculate that our work might find applications in contemporary generative models relying on diffusion, considering our strong results in denoising. 
Taking all these exciting directions into account, we firmly believe that our work would be interesting to the broader research community. **”Drop-in Replacement” and Details on CUDA kernel** We address both topics in detail in the Author Rebuttal section, as other reviewers made similar remarks. **Initialization of MetaSin and Gradient propagation** We initialize the weights to 0 intentionally, as we aim to prevent the activation from having arbitrary frequencies during the initialization phase. Instead, we allow the network to determine which frequencies to use during training, as we illustrate in Figure 11 in the Appendix. This initialization approach constrains the frequencies to remain relatively stable in the initial stages, akin to a Fourier series. However, as training progresses, all parameters become freely updatable, allowing the frequencies to adapt and evolve over time. **On Presentation** We noticed that the reviewer mentioned that our paper is “very well written and easy to follow” despite assigning a “fair” presentation score. Please let us know in case there are any outstanding points we can improve on the writing side. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed answer. To elaborate on: 1. "clarity score": the score takes into consideration "relation to prior work", which was missing multiple papers on combining activation functions. In addition, the description of the CUDA kernel had no details. Based on the rebuttal, I have increased the score. 2. "novelty": arguing about novelty is always tricky; however, from my point of view, building combinations of AFs is not novel. The difference of this work with respect to the other works I cited is that, instead of combining standard AFs (e.g., ReLU, tanh, ...), they combine a ReLU and sines with trainable frequencies. 
This is justified by their applicative domain and their analysis, but it is "niche" in the sense that it is a small modification that only makes sense in this specific set of experiments. I have increased my evaluation to "weak accept", as I believe the paper is clear but the contribution's strength can be discussed (as per point 2 above). --- Reply to Comment 1.1.1: Comment: We are glad that our rebuttal addressed the Reviewer’s concerns on clarity. We will revise the text accordingly to include a discussion on previous methods summarized in Apicella et al., 2021, as well as the details on the CUDA kernel.
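On the zero-initialization question discussed in this thread: for an activation of the assumed form ReLU(x) + Σ_j a_j sin(f_j x + p_j), the gradient with respect to each amplitude a_j is sin(f_j x + p_j), which does not depend on a_j and is generally nonzero. So even with all amplitudes initialized to 0, the sine components receive a learning signal from the first update. A small numerical sketch (the parameterization and names are assumptions, not the authors' code):

```python
import numpy as np

def amp_grads(x, f, p):
    """Analytic d(output)/d(a_j) for the assumed form
    ReLU(x) + sum_j a_j * sin(f_j * x + p_j):
    the derivative is sin(f_j * x + p_j), independent of a_j,
    so it stays nonzero even when all amplitudes start at zero."""
    return np.sin(f[:, None] * x[None, :] + p[:, None])  # shape (K, len(x))
```

Since these gradients are nonzero for typical inputs, zero-initialized amplitudes do not block gradient flow into the sine terms; the frequencies f_j and phases p_j, by contrast, only start moving once the amplitudes become nonzero, since their gradients are scaled by a_j.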
Summary: The authors of this paper propose a new activation function, which relies on a parametrized sinusoidal function instead of only a piecewise linear function. The authors show how this function can lead to performance improvements in the setting of denoising. Strengths: - The authors propose a novel formulation for an activation function, relying on a sum of sines of varying amplitude, frequency and phase (all of which are parameters learned during training). The formulation of this activation function is novel, as far as I am aware. - I appreciate the explanation that the authors provide on the usefulness of the sine activation, namely how it enables the network to capture higher-frequency components. Moreover, I appreciate the inclusion of the link to Fourier series in the Appendix, and I think it may be an interesting topic that the authors may want to elaborate on. - The authors have performed several experiments to evaluate the performance of their method, not only on their main problems of interest (denoising and resampling) but also on more usual classification tasks. They have also made the effort to write optimized code for their proposed activation function (although I highly encourage releasing it to ensure reproducibility). Weaknesses: The main weaknesses of the paper are the following (mostly concerning the experimental setup): - While the experiments cover a wide range of tasks, they share a weakness in that most of them rely on starting from a pretrained network to achieve the best results. While the authors acknowledge this limitation, it would nevertheless improve the paper if results for training without the initial model were included. - Related to the above, the classification experiment only considers the setting where the MetaSin network learns from a pretrained teacher. I believe it would be interesting to include a classification experiment where the new architecture is trained from scratch. 
- In the Appendix, the authors include a plot that shows how the weights of the activation change during the course of training. We can see that most of the weight is on the ReLU activation (in other words, the initial form of the MetaSin activation). While this is not inherently bad, it makes the benefit induced by the new activation less clear in my opinion. I believe that this may also stem from the initialization via a pretrained model (which has already converged while using a ReLU activation), and thus may be alleviated if trained from scratch. I would greatly appreciate it if the authors were able to examine this. **Post rebuttal comment**: As mentioned in my response to the author rebuttal below, while most of my concerns have been addressed, the fact that KD bootstrapping is required remains a limitation. However, I have a generally positive view of this work, given that, as the authors mentioned in their comments, the use of KD bootstrapping for stability, while limiting, is still an interesting approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would be grateful if the authors could elaborate on the issues encountered during training from scratch with their proposed activation. The way the paper is written, it seems to me that training with MetaSin would suffer from the same problems as previous work using sinusoidal activations, e.g. SIREN, which is somewhat surprising given that MetaSin contains ReLU as a special case. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations of their work. Furthermore, I see no immediate negative societal impact arising from this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **MetaSin without KD Bootstrapping from a pre-trained network** During our experiments we had a chance to confirm firsthand the well-known difficulties associated with training *sin*-based activations, especially when they are utilized in convolutional networks. These difficulties may even lead to divergence in training: an example can be found in Figure 1, row 2. We haven’t encountered divergence issues when training MetaSin networks with or without KD Bootstrapping. That said, our experiments in which we did not utilize KD Bootstrapping often yielded sub-par results. For instance, see the last two columns of Table 2: removing KD Bootstrapping leads to worse results on average in resampling. The same behavior can likewise be observed in Table 4 for denoising. We hope that these two experiments, where we compare our full method (that is, MetaSin with KD-B) with an ablation in which we do not utilize a pre-trained network (MetaSin w/o KD-B), will help readers put the contribution of KD Bootstrapping into proper context. If needed, we would also be happy to modify the text to point the reader’s attention to these experiments. **Classification experiment with MetaSin from scratch** To investigate this, we trained various Wide ResNets with the original ReLU activations and with MetaSin activations entirely from scratch and without using any Knowledge Distillation. The table below presents test accuracy on CIFAR-100:

| Model/Activation | WRN-16-2 | WRN-28-2 | WRN-40-2 | WRN-40-4 |
| :--------: | :-------: | :-------: | :-------: | :-------: |
| ReLU | 72.85 | 74.82 | 75.95 | 78.99 |
| MetaSin from scratch | 71.82 | 73.88 | 75.44 | 78.44 |

In accordance with the above discussion, these results underline the role of KD Bootstrapping in achieving the best results with MetaSin networks. To give a concrete example: as we show above, when training from scratch, WRN-16-2-MetaSin at 71.82 accuracy lags behind its ReLU counterpart (WRN-16-2-ReLU) at 72.85 accuracy. 
On the other hand, the last column of Table 6 in our main paper shows that by distilling from a WRN-40-2 teacher, the accuracy of WRN-16-2-MetaSin can be brought up to **74.10**, whereas WRN-16-2-ReLU achieves 73.65 accuracy using the same procedure. We would be happy to modify the paper accordingly to ensure that our side investigation into image classification is as informative as possible. **Shape variation of MetaSin activations** To shed some light on the shapes that MetaSin activations take when training from scratch, we produced a visualization of the MetaSin shapes from the resampling network described in Section 4.1. The MetaSin shapes are obtained from the first and last blocks of models trained from scratch and with KD Bootstrapping. This visualization can be found in the pdf file attached to the Author Rebuttal section. The figure shows that, while there is significant local variation between individual MetaSins, globally the rough ReLU shape is still discernible even without any involvement from the ReLU teacher. **Issues when training from scratch:** One of the main challenges when training *sin*-based activations such as SIREN is their dependence on initialization. In Appendix C we present an illustration of this behavior through a set of toy examples. Further challenges when training *sin*-based activations are inconsistent gradients, due to the complex shapes the activation can take, and a large degeneracy of local minima caused by symmetries. These make the training of deep networks especially unstable and may lead to divergence (see Figure 1, Deep CNN/SIREN). With MetaSin we avoid the aforementioned issues by having better coverage of plausible ranges of the shape parameters and by introducing the additional ReLU component that stabilizes the training.
As such, when training MetaSin models (either with or without KD Bootstrapping) we have not encountered dramatic inconsistencies in model accuracy due to initialization, nor any further stability issues during training. The main issue with training from scratch without KD Bootstrapping is that in challenging real-world problems (such as training direct prediction models for denoising and resampling) the accuracy of the model is inferior to that obtained with KD Bootstrapping. On the other hand, in simpler tasks, such as the various overfitting experiments we present throughout the paper, models trained from scratch tend to perform just as well. Finally, in Appendix I we present an interesting finding: a kernel-predicting version of the denoiser from Section 4.2 (as opposed to direct prediction) also trains well without KD Bootstrapping, which we hypothesize is due to the reduced dimensionality of the problem space; this remains an interesting direction for further investigation. --- Rebuttal Comment 1.1: Title: Response to rebuttal. Comment: Thank you very much for your detailed response to my concerns. I greatly appreciate the additional clarifications on the points I raised. While my concerns have been mostly addressed, from the resulting experiments it still seems that MetaSin requires bootstrapping in order to achieve good results, both in the original setting as well as in the image classification one. While this is fully acknowledged in the paper, it nevertheless remains a limitation of this work. As such, while I'm still positive towards this work, I am electing to keep my score for now. I am, however, interested in the discussion with the other reviewers as well. --- Reply to Comment 1.1.1: Comment: We are happy to hear that our clarifications were helpful in addressing most of the reviewer’s concerns.
We in fact consider KD bootstrapping a core ingredient of our method that helps alleviate the difficulties associated with training models with *sin*-based activations, which we elaborate on in detail in our paper. To the best of our knowledge, the use of knowledge distillation for improving training stability has not been explored before, and we have been positively surprised by the effectiveness of this relatively easy-to-implement technique.
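To make the discussion of *sin*-based activations with a stabilizing ReLU component more concrete, here is a minimal, purely illustrative sketch in that spirit. The parameterization (amplitudes `amps`, frequencies `freqs`, phases `phases`, and the function name itself) is our own assumption for illustration and is not the exact MetaSin formulation from the paper.

```python
import math

def metasin_like(x, amps, freqs, phases):
    """Hypothetical sin-based activation with a stabilizing ReLU term.

    An illustrative guess at the general shape family discussed in the
    rebuttal (a ReLU backbone plus K learnable sinusoidal components),
    not the paper's actual MetaSin definition.
    """
    out = max(x, 0.0)  # the ReLU component that stabilizes training
    for a, w, p in zip(amps, freqs, phases):
        out += a * math.sin(w * x + p)  # K sinusoidal shape components
    return out

# With zero amplitudes the activation reduces to a plain ReLU.
print(metasin_like(-2.0, [0.0], [1.0], [0.0]))  # -> 0.0
```

In such a parameterization, driving the sinusoidal amplitudes towards zero recovers the ReLU teacher's shape, which is consistent with the observation above that the rough ReLU shape remains globally discernible after training.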
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and insightful comments. We are encouraged that the reviewers are unanimously leaning towards accepting our submission for publication. In the following we start by briefly summarizing our motivations, then address similar remarks by two reviewers about the phrasing we use to describe the usage of MetaSin activations, and finally provide more details on our optimized CUDA implementation. **Motivation and broader implications** Our motivation for this work came from the promising recent results obtained by using MLPs with *sin* activations in visual representation applications including images, video, and 3D shapes. Despite these findings, in the general area of image prediction models, i.e. the large family of models that predict colors of image pixels as their output, ReLU activations still tend to be the default choice. From our own experience, this tendency stems from various issues associated with *sin* networks, such as training stability and sensitivity to initialization, as well as the lack of methods for utilizing *sin* activations in *convolutional* networks, which enjoy heavy use in the domain of image prediction applications. The aim of the techniques we present in this paper is to pave the way for practitioners of this vast domain to reap the benefits of *sin* activations while maintaining the relative training stability of ReLU networks. Our results suggest that there is room for significant improvements even in cutting-edge image prediction models, which can be leveraged through a targeted approach such as MetaSin with KD-Bootstrapping. We believe that our findings may be highly useful to practitioners in related application areas, including and beyond the ones we touch on in this paper.
**MetaSin as a “drop-in” ReLU replacement** (`9soJ`, `C7v1`) This phrase refers to the process of switching to MetaSin in existing code, which consists of replacing `relu` functions with `metasin(K)`. However, after reading their comments and revisiting our own initial text, we agree with the reviewers that it can be misleading, as there are the additional steps of KD-Bootstrapping and training the model and shape parameters. We will accordingly replace the two occurrences of the term “drop-in replacement” in lines 9 and 136 with “convenient replacement”, and remove the word “simply” in line 12. **Details on C++/CUDA Implementation** (`9soJ`, `pU6A`) To address the inefficiency of a naive Python API implementation of MetaSin, we implemented custom-optimized fused CUDA kernels in C++ for both the forward and backward functions of the MetaSin activation. Our implementation can be integrated into the PyTorch and TensorFlow Python APIs. Throughout the development process we also tested the native automatic compilation functionalities provided by both frameworks (specifically: jit and torch.compile for PyTorch, and jit and XLA in TensorFlow). While notable improvements over the baseline Python API implementation can be made through these out-of-the-box facilities, we obtained the best performance in terms of speed and memory consumption using our custom-designed CUDA functions. Some of the techniques we utilized in our code are as follows: to optimize the memory footprint and inference speed of MetaSin, we remove the intermediate quantities that the autograd engine computes and instead compute the output and gradient tensors directly from the input and the MetaSin parameters. Moreover, we further optimized the computation speed with improved caching and reduction strategies at the warp and block levels. In particular, we exploited a two-level reduction strategy based on *pairwise summation* in our backward kernel.
This way we could avoid numerical errors in the gradient tensors and achieve accuracy comparable to autograd in float32 precision. The reduction in computational overhead from the aforementioned optimizations made it feasible to run the compute-intensive experiments we present throughout the paper, and, we believe, demonstrates the viability of MetaSin activations for most practical tasks. That being said, other optimizations that we were not able to explore due to time constraints could help reduce the overhead even further. **Small Corrections** Finally, we greatly appreciate that the reviewers took the time to point out various typos, spelling errors, minor latex issues, and so on. We noted them all and will fix them in the final manuscript. We address other remarks individually for each reviewer in the corresponding Official Comment sections. Pdf: /pdf/0b124771312b7dd9f18afdf53a38739a972f3e51.pdf
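The *pairwise summation* idea mentioned in the CUDA discussion above can be sketched in a few lines. This is a generic illustration of the technique, not the actual CUDA kernel: splitting the reduction recursively into halves keeps rounding error growing only logarithmically with the number of terms, rather than linearly as in a naive left-to-right sum.

```python
def pairwise_sum(values):
    """Recursive pairwise (cascade) summation.

    Generic sketch of the reduction scheme referenced in the rebuttal;
    the actual backward kernel performs an analogous two-level reduction
    at the warp and block levels on the GPU.
    """
    n = len(values)
    if n == 0:
        return 0.0
    if n <= 2:
        return sum(values)
    mid = n // 2
    # Summing the two halves independently before combining them bounds
    # the error growth at O(log n) rather than O(n).
    return pairwise_sum(values[:mid]) + pairwise_sum(values[mid:])
```

The effect is most visible in low precision: in float32, accumulating millions of small gradient contributions left-to-right can lose several digits, while a pairwise tree reduction stays close to the float64 reference.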
NeurIPS_2023_submissions_huggingface
2023
Augmentation-Free Dense Contrastive Knowledge Distillation for Efficient Semantic Segmentation
Accept (poster)
Summary: In this manuscript, the authors propose an effective knowledge distillation framework for the semantic segmentation task. Specifically, in addition to traditional knowledge distillation on segmentation masks as well as feature distillation, to better align the dense features, the authors introduce contrastive learning on both the spatial and channel dimensions. The proposed Af-DCD loss significantly improves the performance of CNN-based segmentation networks via knowledge distillation without data augmentation. Strengths: 1. The motivation is clear, i.e., the traditional feature distillation loss has difficulty capturing contextual information and positional channel-group information. 2. The extensive experiments demonstrate the effectiveness of the proposed method. The ablation study and discussion are abundant and valuable. 3. The proposed method is easy to follow. Weaknesses: 1. Intuitively, the proposed Af-DCD loss can align the dense features independently; therefore the authors could show the ablation study of baseline+L_{Af-DCD} only and compare the results with baseline+L_{fd}. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have listed the limitation in the supplementary material, i.e., the gain of Af-DCD on Transformer-based architectures is not as large as on CNN-based architectures. We believe this limitation can be seen as a future direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive review, and the recognition of our motivation, method and experiments. In what follows, we provide detailed responses to address your concerns one by one: **1. Your comments on the weakness about the lack of an ablation study** “Intuitively, the proposed Af-DCD loss can align the dense feature independently, therefore the authors could show the ablation study of baseline$+L_{Af-DCD}$ only and compare the results with baseline$+L_{fd}$”. **Our responses**: **(1)** Yes, our proposed omni-contrasting loss $L_{Af-DCD}^{OC}$ (abbreviated as $L_{Af-DCD}$ in your comments) can align the dense features independently, since it is formulated as an effective contrastive distillation learning scheme for transferring the dense and structured local knowledge learnt by the pre-trained teacher model, across both channel and spatial dimensions, to the target student model; **(2)** Following your insightful comments, we performed ablative experiments on the Cityscapes and ADE20K datasets using the same experimental setups as Table 4 in the main manuscript. Detailed results are summarized in the two tables below.
It can be seen that **(a)** Baseline$+L_{Af-DCD}^{OC}$ brings significant accuracy gains to baseline models on both the Cityscapes and ADE20K datasets; **(b)** Compared to baseline$+L_{fd}$, baseline$+L_{Af-DCD}^{OC}$ gets student models with better accuracy on both datasets, while maintaining almost the same training efficiency; **(c)** The accuracy gain of baseline$+L_{Af-DCD}^{OC}$ is slight on the relatively small Cityscapes dataset (0.10% mIOU gain), but notably pronounced on the much larger ADE20K dataset (0.49% mIOU gain); **(d)** The ablative experimental results reported in Table 4 of the main manuscript have already shown that baseline$+L_{fd}+L_{Af-DCD}^{OC}$ gets student models with 76.44% mIOU and 36.01% mIOU on the Cityscapes and ADE20K datasets respectively, which are obviously better than both baseline$+L_{fd}$ and baseline$+L_{Af-DCD}^{OC}$, showing that the two loss terms $L_{fd}$ and $L_{Af-DCD}^{OC}$ are complementary; **(e)** We have appended this ablation study to Table 4. The updated version of Table 4 is provided as Table 1 in the one-page PDF file attached to our top-level responses titled **“Author Rebuttal by Authors”**.

|Method (on Cityscapes)|mIOU (%)|$\Delta$mIOU (%)|$T_{train}$ (h)|
|:--|:--:|:--:|:--:|
|Baseline|73.20|n/a|n/a|
|+$L_{fd}$|75.88|+2.68|4.02|
|+$L_{Af-DCD}^{OC}$|75.98|+2.78|4.06|

|Method (on ADE20K)|mIOU (%)|$\Delta$mIOU (%)|$T_{train}$ (h)|
|:--|:--:|:--:|:--:|
|Baseline|33.91|n/a|n/a|
|+$L_{fd}$|34.92|+1.01|4.32|
|+$L_{Af-DCD}^{OC}$|35.41|+1.50|4.35|

**2. Your comments on our discussions about the limitations of the proposed method Af-DCD** “The authors have listed the limitation in the supplementary material, i.e., the gain of Af-DCD on Transformer based architecture are not as large as on CNN based architecture. We believe this limitation can be seen as a future direction”.
**Our responses**: Thank you for accepting the limitations of our method that we discussed in the last section, “Limitations of Af-DCD”, of the supplementary material. Here, we add more discussion to further clarify the main limitation of our method: **(1)** The reason why the current design of Af-DCD cannot easily generalize to transformer-based structures is the following: Af-DCD exploits dense pixel-wise information within each local patch, via the feature partition across both channel and spatial dimensions, to formulate contrastive feature mimicking conditioned on the single image input fed to both the pre-trained teacher model and the target student model; transformer-based structures built upon self-attention modules, in contrast, primarily encode global patch-to-patch feature dependencies in an image input, which appears to be in conflict with this design; **(2)** Previously, we performed a distillation experiment to explore this. In the experiment, we applied Af-DCD to SegFormer (MiT-B4 encoder as teacher and MiT-B0 encoder as student) [1], a seminal transformer-based structure for semantic segmentation. Detailed results are summarized in the table below, where Af-DCD only brings a 0.31% mIOU gain to the baseline; **(3)** A potential direction to address the above issue is how to preserve local information and properly align Af-DCD with transformer-based structures. Please allow us to leave it as a future research direction.

|Method (on Cityscapes)|mIOU (%)|$\Delta$mIOU (%)|
|:--|:--:|:--:|
|Teacher: SegFormer-MiT-B4|81.23|n/a|
|Student (baseline): SegFormer-MiT-B0|75.58|n/a|
|Af-DCD|75.89|+0.31|

[1] Enze Xie, et al. “SegFormer: Simple and efficient design for semantic segmentation with transformers”, NeurIPS 2021. **Finally**, during the rebuttal phase, we also conducted more experiments to improve the ablation studies and added discussions to improve the clarity of our method.
You are referred to our top-level responses titled **“Author Rebuttal by Authors”**, and our responses to the other reviewers for details. Looking forward to your feedback. --- Rebuttal Comment 1.1: Comment: Thanks for your response. Most of my concern has been solved. I tend to accept this paper. --- Reply to Comment 1.1.1: Title: Thanks for the Recognition of Our Rebuttal Comment: Thank you so much for the recognition of our responses. We are glad to see that you tend to accept our paper. We will make more efforts to improve our paper further. Many thanks for your constructive comments, time and patience.
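For readers weighing the baseline$+L_{fd}$ versus baseline$+L_{Af-DCD}^{OC}$ ablation discussed in this thread, the baseline feature-distillation term $L_{fd}$ can be thought of as plain per-element feature mimicking. The snippet below is a simplified illustration of that idea (mean squared error over all channels and pixels); the exact form and normalization of $L_{fd}$ in the paper (formula 3) may differ.

```python
def feature_distillation_loss(teacher_feats, student_feats):
    """Simplified per-element feature mimicking loss (MSE).

    teacher_feats / student_feats: same-shape nested lists indexed as
    [channel][pixel]. This mirrors the role of an L_fd-style term, which
    forces the student feature to match the teacher's at every pixel of
    every channel; it is an illustrative sketch, not the paper's formula.
    """
    total, count = 0.0, 0
    for t_ch, s_ch in zip(teacher_feats, student_feats):
        for t, s in zip(t_ch, s_ch):
            total += (t - s) ** 2
            count += 1
    return total / count
```

Such a term only penalizes pointwise deviation; the contrastive term discussed in the rebuttal additionally encourages the student to reproduce the *relative* structure among neighboring features, which is why the two are complementary.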
Summary: This paper focuses on knowledge distillation for semantic segmentation and introduces an augmentation-free dense contrastive loss function. The student and teacher feature maps are partitioned into patches, and both spatial and channel contrasting is performed within these local neighborhoods. For the contrastive loss, positive/negative feature pairs are formed using teacher and student features extracted from the same image without using any augmentations. Experiments were conducted on five segmentation datasets and the proposed approach is shown to perform better than various existing works. Strengths: The paper was easy to follow. Experiments were conducted on several datasets. Weaknesses: The title and introduction section emphasize "augmentation-free". However, the motivation/need to be "augmentation-free" is not clear to me. In the introduction, the authors claim using augmentations leads to high resource demand, which I disagree with. The proposed approach passes the same image through the teacher and student networks. One could also use the proposed loss function as-is by passing the original image to one network and an augmented version of the image to the other network. The computation cost would be almost the same, except for the augmentation operation cost, which is usually small compared to the whole forward/backprop cost. In fact, using appropriate augmentations may even be helpful, as the model would learn robust features that are invariant to these augmentations. For most pixels (other than those close to object boundaries), the neighborhood is surrounded by pixels of the same class, and treating them as negatives in the contrastive loss is counter-intuitive to me. Ideally, for semantic segmentation, we would want pixels of the same class to have similar representations so that they can easily be classified to the same class.
In line 197, the authors mention that they use Euclidean distances instead of cosine similarity in the contrastive loss without providing any explanation. The main contribution of this paper is the contrastive loss function L_{AF-DCD}. All the other loss functions are from prior works. In order to show the effectiveness of this loss function, the authors should compare results with and without the proposed loss when all the other loss functions are present, i.e., a comparison between (L_kd + L_fd) and (L_kd + L_fd + L_afdkd). Such a comparison is not provided in Table 4. Typo: It should be Table 3, not 4(a), in line 287. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Why should the proposed approach be augmentation-free? The proposed loss can be used with augmentations also. Why Euclidean distance in the contrastive loss? Current experimental comparisons do not clearly demonstrate the effectiveness of L_{AF-DCD} in the presence of all the other losses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive review, and the recognition of the proposed approach and the experiments. In what follows, we provide detailed responses to address your concerns one by one: **1. The first concern about why the motivation/proposed method should be augmentation-free**. **Our responses**: Our motivation/method being augmentation-free is mainly due to the contrastive learning formulation for the supervised semantic segmentation task, as opposed to the unsupervised image classification task: **(1)** Most existing contrastive learning methods adopt the self-supervised formulation (assigning a single binary label to each image pair) conditioned on heavy data augmentations (DAs) to learn a proper representation for a given backbone from a large amount of unlabeled images, while our method addresses semantic segmentation distillation in which classification needs to be pixel-wise; **(2)** For our task, passing the original image to one of the teacher and student networks and its augmented version (via, e.g., crop-resize, rotation and flip) to the other network (i.e., the way used in existing methods) usually breaks the geometric feature alignment, i.e., features at the same locations are no longer positionally aligned, which conflicts with the pixel-wise classification in semantic segmentation distillation. Therefore, our method passes the same image through the teacher and student networks; **(3)** Our method creates dense negative samples within each local patch of the same image via the partition across channel and spatial dimensions, and thus is augmentation-free and efficient (see Tables 3 & 4); **(4)** Yes, our method can also be used with DAs. Actually, for experiments on PASCAL VOC, etc., DAs such as crop-resize and flip are used following common setups in semantic segmentation distillation, but the teacher and student networks still share the same augmented image input. **2.
The second concern about why we treat the neighborhood pixels of a specific pixel as its negative samples**. **Our responses**: **(1)** At first glance, it is indeed counter-intuitive to treat the neighborhood pixels of a specific pixel as its negative samples, since they usually tend to be of the same class. However, in our formulation (formula 1), **we already have the feature imitation loss $L_{fd}$**, which directly forces the student to be the same as the teacher at every pixel for each channel, taking the role of attaining your mentioned ideal representation learning goal. **The role of our contrastive loss $L_{Af-DCD}$** is to promote the process of transferring the dense and structured local knowledge learnt by the teacher model (which can better classify difficult pixels at object/image boundaries, small objects, object occlusions, difficult categories and rare views, as illustrated in Figure 4(c) of the main manuscript and Figure 5 of the Appendix) to the student model. That is, $L_{fd}$ and $L_{Af-DCD}$ conditioned on the same source feature pairs work collaboratively to get improved feature distillation, which is verified by the ablation studies in Table 4(a); **(2)** Existing feature visualization work [1] shows that layer-specific feature channels from a pre-trained CNN model usually have changing salient activations across neighboring locations and channels. In line with this, we also perform an ablation study to compare $L_{Af-DCD}$ and $L_{fd}$. **You are referred to our first set of responses to Reviewer wLTq for details**. [1] M. D. Zeiler and R. Fergus, "Visualizing and Understanding Convolutional Networks", ECCV 2014. **3. The third concern about why we use Euclidean distance in our contrastive loss**.
**Our responses**: **(1)** The reason is simply our intuition that improved performance would be attained by choosing the same type of function $d$ for the basic feature distillation loss $L_{fd}$ (formula 3) and the contrastive loss $L_{Af-DCD}^{OC}$ (formula 7) conditioned on the same source features; **(2)** We performed ablative experiments to compare our contrastive loss with 3 types of the function $d$ on the Cityscapes and ADE20K datasets. Results show that our method with the $L2$-normed distance is the best, which supports the above intuition. **You are referred to our third set of responses to Reviewer YQfC for details**. **4. The fourth concern about the lack of an ablation study to compare $L_{kd}+L_{fd}$ and $L_{kd}+L_{fd}+L_{Af-DCD}^{OC}$**. **Our responses**: **(1)** Yes, our core contribution is the augmentation-free contrastive loss function $L_{Af-DCD}^{OC}$ across channel and spatial dimensions (a neat combination of our two basic contributions $L_{Af-DCD}^{CC}$ across the channel dimension and $L_{Af-DCD}^{SC}$ across the spatial dimension); **(2)** Following your insightful comments, we performed ablative experiments on the Cityscapes and ADE20K datasets using the same experimental setups as Table 4. It can be seen that $L_{kd}+L_{fd}+L_{Af-DCD}^{OC}$ performs better than $L_{kd}+L_{fd}$ on both datasets while maintaining similar training efficiency, demonstrating the effectiveness of $L_{Af-DCD}^{OC}$ in the presence of all the other losses.

|Method (on Cityscapes)|mIOU (%)|$\Delta$mIOU (%)|$T_{train}$ (h)|
|:--|:--:|:--:|:--:|
|Baseline|73.20|n/a|n/a|
|$+L_{kd}+L_{fd}$|76.04|+2.84|4.05|
|$+L_{kd}+L_{fd}+L_{Af-DCD}^{OC}$|76.52|+3.32|4.27|

|Method (on ADE20K)|mIOU (%)|$\Delta$mIOU (%)|$T_{train}$ (h)|
|:--|:--:|:--:|:--:|
|Baseline|33.91|n/a|n/a|
|$+L_{fd}+L_{kd}$|35.22|+1.31|4.34|
|$+L_{kd}+L_{fd}+L_{Af-DCD}^{OC}$|36.21|+2.30|4.51|

**Finally**, thank you for pointing out the typo in line 287.
We will correct it and do a careful job of writing and proofreading to improve the presentation of our final paper. During the rebuttal phase, we also conducted more experiments to improve the ablation studies and added discussions to improve the clarity of our method. You are referred to our top-level responses titled **"Author Rebuttal by Authors”**, and our responses to the other reviewers for details. Looking forward to your feedback. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal. Comment: Thank you for the rebuttal and the additional ablation studies. The rebuttal addresses most of my concerns and I have increased my rating to 'borderline accept'. --- Reply to Comment 1.1.1: Title: Thanks for the Recognition of Our Rebuttal Comment: Thank you so much for the recognition of our responses. We are glad to see that you have raised your score. We will continue to improve the experimental comparisons, discussions, etc., so as to further improve our paper during the final revision. Many thanks for your constructive comments, time and patience.
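To illustrate the contrastive formulation discussed in this thread, here is a minimal, hypothetical sketch of an augmentation-free contrastive distillation term for a single patch: for each location, the positive pair is the teacher/student feature at that same location, the negatives are student features at the other locations of the same patch, and the similarity logits are negative squared Euclidean distances (matching the rebuttal's choice of Euclidean distance over cosine similarity). The function name, temperature `tau`, and normalization are our own assumptions, not the paper's exact $L_{Af-DCD}$.

```python
import math

def patch_contrastive_distill(t_feats, s_feats, tau=0.1):
    """Illustrative InfoNCE-style contrastive distillation within one patch.

    t_feats / s_feats: lists of feature vectors (one per location) from
    the teacher and student for the same image patch. For each location i,
    (t_i, s_i) is the positive pair and the remaining student features act
    as in-patch negatives; logits are negative squared Euclidean distances.
    """
    def neg_sq_dist(a, b):
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    n = len(t_feats)
    loss = 0.0
    for i in range(n):
        logits = [neg_sq_dist(t_feats[i], s_feats[j]) / tau for j in range(n)]
        m = max(logits)  # stabilize the log-sum-exp
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)  # cross-entropy towards index i
    return loss / n
```

When the student already matches the teacher and the patch locations are well separated in feature space, this loss is close to zero; mixing up a student feature with a neighboring location's increases it, which is the sense in which in-patch neighbors act as useful negatives on top of the pointwise $L_{fd}$ term.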
Summary: This paper points out that existing knowledge distillation methods rely heavily on data augmentation and memory buffers, which require high computational resources; this is further amplified for the segmentation task, which requires relatively high-resolution feature maps for processing. To alleviate this complexity, the method called Af-DCD is proposed, which aims to tackle the segmentation task by leveraging knowledge distillation based on a novel contrastive learning scheme. More specifically, this method first leverages a masked feature mimicking strategy and proposes a novel contrastive learning loss. Experimental results confirm that the proposed method is effective. Strengths: 1. This paper is easy to read and understand. 2. The proposed method achieves the best performance against competitors. 3. Numerous discussions and ablations are presented to validate the choices. Weaknesses: 1. In Tables 1 and 2, it seems like "ours" refers to cumulatively adding all the different methods (SKD, IFVD, CWD, etc.). The presentation needs improvement. 2. It would be better if the authors cited each method in Tables 1 and 2 (SKD, IFVD, ...) so that readers do not have to look up what those abbreviations refer to. 3. In Section 4.2, it is only 'stated' that the proposed method performs the best. I don't find any analysis, explanations or reasoning. Moreover, although FLOPs and Params are also included, there is no text covering them. 4. In line with Section 4.2, Sections 4.3 and 4.4 also lack explanations or reasoning, or at least not sufficiently. These sections simply state what the table or figure shows without sufficient analysis or attempts to deliver insights. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weaknesses above. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are properly addressed in Section E. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive review, and the recognition of the novelty, the effectiveness, and the ablation studies of our work. In what follows, we provide detailed responses to address your concerns one by one: **1. The first weakness** “In Table 1...ours refers to cumulatively adding all the different methods...needs improvements”. **Our responses**: **(1)** In Table 1 and Table 2, **“Ours” actually refers to applying our proposed method Af-DCD independently, not** to cumulatively adding all the different methods (SKD, IFVD, CWD, etc.); **(2)** The symbol “**+**” in each row of Table 1 and Table 2 denotes applying a specific method independently to the target teacher-student network pair for semantic segmentation on the given dataset; **(3)** We are sorry for the confusion due to the potential misunderstanding of the symbol “**+**”. Following your careful comments, we will remove the symbol “**+**”, clarify the meaning of “Ours”, and improve the presentation of our final paper. **2. The second weakness** “It would be better if... cite each methods in Table 1...what those abbreviations refer to”. **Our responses**: Thanks a lot for your great suggestion. Accordingly, we will add references to all methods (SKD, IFVD, CWD, CIRKD, MasKD, MGD, etc.) compared in Table 1, Table 2 and other related tables. **3. The third weakness** “In section 4.2...I don't find any analysis... Although FLOPs and Params...no texts covering them”. **Our responses**: We really appreciate your insightful comments. **(1)** In Section “**4.2 Main Results**”, we intend to compare the distillation performance of our method Af-DCD with recent state-of-the-art methods for semantic segmentation.
Aiming for a comprehensive comparison, we conducted extensive experiments on public datasets following general settings in semantic segmentation distillation: **(i)** We first conduct experiments on the most popular Cityscapes dataset to **validate the generalization ability of our method to different types of teacher-student network pairs**. From the results shown in Table 1(a), we can see that our method can handle well teacher-student network pairs in which the students (e.g., DeepLabV3-Res18 and DeepLabV3-MBV2) have the same segmentation framework but different backbones. The results of Table 1(b) further show that our method also generalizes well to teacher-student network pairs in which the students (e.g., DeepLabV3-Res18 and PSPNet-Res18) have different segmentation frameworks but the same backbone; **(ii)** Next, we conduct experiments on four other datasets including PASCAL VOC, CamVid, ADE20K and COCO-Stuff-164K to **validate the generalization ability of our method to various semantic segmentation tasks**. From the results shown in Table 2(a)-(d), we can see that our method consistently brings significant absolute mIOU gains (1.42%~3.04%) to different student models on small-size (CamVid), medium-size (Cityscapes and PASCAL VOC) and large-size (ADE20K and COCO-Stuff-164K) datasets; **(2)** The superior performance of our method over existing methods demonstrates the effectiveness of the proposed omni-contrasting distillation learning scheme, which transfers the dense and structured local knowledge learnt by the pre-trained teacher model, across both channel and spatial dimensions, to the target student model; **(3)** The basic goal of our work is to leverage a pre-trained high-capacity (large and accurate) teacher model to improve the training of a low-capacity (smaller and less accurate) student model, enabling efficient deployment of semantic segmentation models.
Following CIRKD [9], FLOPs and Params are included in Table 1 and Table 2 to compare the computational cost of the teacher and student networks (e.g., at most 19.09$\times$ Params compression and 18.40$\times$ FLOPs compression). **4. The last weakness** “In line with section 4.2, section 4.3 … without sufficient analysis...to deliver insights”. **Our responses**: Although we provide some necessary explanations and analysis in Section “**4.3 Ablative Studies**” and “**4.4 Discussion**”, we agree with you that they are still not sufficient. Restricted by the limited page length, we put some detailed analysis and explanations in the supplementary material (see Sections B-D), as stated in Lines 291-292 of Section 4.3 and Line 307 of Section 4.4. Here, we add more explanations to deliver the main insights: **(1)** Note that our main contributions are the Augmentation-free Contrastive Losses $L_{Af-DCD}^{CC}$ across the channel dimension, $L_{Af-DCD}^{SC}$ across the spatial dimension and $L_{Af-DCD}^{OC}$ across both channel and spatial dimensions (a neat combination of $L_{Af-DCD}^{CC}$ and $L_{Af-DCD}^{SC}$, i.e., our core contribution), which improve the basic feature distillation loss $L_{fd}$ in our formulation while maintaining training efficiency. Ablative results in Table 4(a)-(b) progressively validate their effectiveness with different loss combinations and datasets, and ablative results in Table 3 validate the training efficiency.
Ablations in Figure 3 further study the choices of major hyper-parameters, verifying the robustness of our method; **(2)** Figure 4(a)-(b) provides statistical distributions of the feature distance between teacher and student models, and heat-map visualizations, to validate the key insight of our design: $L_{Af-DCD}^{OC}$ can effectively encourage the student model to mimic dense and structured local knowledge learnt by the teacher model; **(3)** Following your insightful comments, we will include more text to improve the explanations and analysis in Sections 4.2, 4.3 and 4.4 of our final paper. **Finally**, during the rebuttal phase, we also conducted more experiments to improve the ablation studies and added discussions to improve the clarity of our method. You are referred to our top-level responses titled **“Author Rebuttal by Authors”**, and our responses to the other reviewers, for details. Looking forward to your feedback. --- Rebuttal Comment 1.1: Title: Genuinely Looking Forward to Your Feedback Comment: Dear Reviewer cEUs, Thanks again for your comments and time. As the deadline for the author-reviewer discussion phase is approaching today, we sincerely hope to hear your feedback and see whether our responses solve your concerns. The merits of our work have been consistently recognized by you and all three other reviewers. On the whole, **all your concerns refer to improving the presentation of "Section 4 Experiments"**. To the best of our understanding, we believe that our responses should have cleared your concerns. We genuinely hope you could check our responses and kindly let us know your valuable feedback. We would be happy to provide any additional clarifications that you may need. Best regards, Authors
Summary: This paper proposes a novel knowledge distillation method for semantic segmentation, called Augmentation-Free Dense Contrastive Knowledge Distillation (Af-DCD). Af-DCD is a new attempt at using contrastive learning in knowledge distillation for semantic segmentation, which alleviates the high computational cost brought by data augmentation and memory buffers. Af-DCD utilizes feature partitions across both channel and spatial dimensions, allowing it to effectively transfer dense and structured local knowledge learnt by the teacher model to a target student model while maintaining training efficiency. Experimental results on mainstream benchmarks demonstrate the effectiveness of the proposed Af-DCD. Strengths: 1. The experiments are sufficient. A lot of experiments and visual analysis prove the effectiveness and superior performance of the proposed Af-DCD. 2. The design of Af-DCD is clever and makes use of the structural information of the teacher along both the channel and spatial dimensions. 3. The overall experimentation is solid and the code is available, which is nice. Weaknesses: 1. The organization of references is poor. References are not added for the specific methods in the table. And there is no reference for MasKD in the whole paper, which actually refers to [17]; this is confusing. The description of CKD in Section 2 is actually the description of CWD. 2. Some ablation studies are missing, for example the combination of Channel Contrasting and Spatial Contrasting in Table 4(a), and the choice of the function d in formula 7. 3. No distillation experiment on transformer-based structures has been carried out, and it is explained in the appendix that transformer-based structures gain little improvement from Af-DCD, which limits the generality of Af-DCD. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Is Omni-Contrasting necessary?
The combination of Channel Contrasting and Spatial Contrasting in Table 4(a) is needed to demonstrate the superiority of Omni-Contrasting. 2. Please add some ablation studies on the choice of the function d in formula 7, which can not only help select the appropriate function but also increase interpretability. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
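The feature partitioning that this review summarizes (splitting a feature map into disjoint local blocks across both the channel and spatial dimensions) can be sketched in a few lines. The group size, patch size, array shapes and function name below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def partition_patches(feat, ch_groups=2, patch=2):
    """Partition a (C, H, W) feature map into disjoint local blocks across
    both channel and spatial dimensions; each row of the result is one block.
    Group/patch sizes are illustrative assumptions, not Af-DCD's settings."""
    C, H, W = feat.shape
    assert C % ch_groups == 0 and H % patch == 0 and W % patch == 0
    g = C // ch_groups
    blocks = (feat.reshape(ch_groups, g, H // patch, patch, W // patch, patch)
                  .transpose(0, 2, 4, 1, 3, 5)       # (groups, Ph, Pw, g, p, p)
                  .reshape(-1, g * patch * patch))   # one row per local block
    return blocks

feat = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
blocks = partition_patches(feat, ch_groups=2, patch=2)
print(blocks.shape)  # (8, 8): 2 channel groups x 2x2 spatial patches
```

A contrastive mimicking loss would then be computed between corresponding teacher and student blocks; the partition itself is lossless, since every feature entry lands in exactly one block.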
Rebuttal 1: Rebuttal: Thank you for the constructive review, and the recognition of our work. In what follows, we provide detailed responses to address your concerns one by one: **1. The first weakness about the organization of references**. **Our responses**: **(1)** We agree with you, and will add references to the specific methods in Table 1, Table 2 and other related tables; **(2)** You are correct: MasKD in Table 1 refers to [17], published in ICLR 2023. MasKD uses a set of learnable embeddings to localize the pixels of interest and generate the distillation masks; **(3)** Yes, in Section 2, [22] should refer to CWD, not SKD, which should refer to [20]. We are sorry for these reference typos; **(4)** Following your careful comments, we will fix confusing/inaccurate references and proofread carefully to improve the presentation of our final paper. **2. The second weakness and the first question about the lack of ablation study** “The combination of Channel Contrasting and Spatial Contrasting in Table 4(a)”. **Our responses**: **(1)** Following your insightful comments, we perform ablative experiments on the Cityscapes and ADE20K datasets using the same experimental setups as Table 4. Detailed results are shown in the two tables below. It can be seen that the combination of Channel Contrasting (CC) and Spatial Contrasting (SC) performs better than CC or SC alone, but worse than Omni-Contrasting (OC), on both datasets; **(2)** Note that **the concepts of Augmentation-free CC and SC are two basic contributions of our work**. In this context, the direct combination of them can be viewed as our Vanilla OC.
Comparatively, the proposed OC is smarter and neater: it groups pixels into a number of disjoint local patches and leverages CC and SC within each local patch, instead of over the holistic feature maps, to better exploit dense and structured local information for contrastive feature mimicking, showing superiority in both distillation accuracy and training speed; **(3)** Now, it is clear that **Omni-Contrasting is indeed necessary**.

| Method (on Cityscapes) | mIOU (%) | $\Delta$mIOU (%) | $T_{train}$ (h) |
|:--|:--:|:--:|:--:|
| Baseline | 73.20 | n/a | n/a |
| $+L_{fd}+L_{Af-DCD}^{CC}$ | 76.23 | +3.03 | 4.13 |
| $+L_{fd}+L_{Af-DCD}^{SC}$ | 76.26 | +3.06 | 4.18 |
| $+L_{fd}+L_{Af-DCD}^{CC}+L_{Af-DCD}^{SC}$ | 76.33 | +3.13 | 4.29 |
| $+L_{fd}+L_{Af-DCD}^{OC}$ | 76.44 | +3.24 | 4.25 |

| Method (on ADE20K) | mIOU (%) | $\Delta$mIOU (%) | $T_{train}$ (h) |
|:--|:--:|:--:|:--:|
| Baseline | 33.91 | n/a | n/a |
| $+L_{fd}+L_{Af-DCD}^{CC}$ | 35.72 | +1.81 | 4.41 |
| $+L_{fd}+L_{Af-DCD}^{SC}$ | 35.22 | +1.31 | 4.45 |
| $+L_{fd}+L_{Af-DCD}^{CC}+L_{Af-DCD}^{SC}$ | 35.81 | +1.90 | 4.54 |
| $+L_{fd}+L_{Af-DCD}^{OC}$ | 36.01 | +2.10 | 4.48 |

**3. The second weakness and the second question about the lack of ablation study** “the choice of the function $d$ in formula 7”. **Our responses**: Following your insightful comments, we perform ablative experiments on the Cityscapes and ADE20K datasets using the same experimental setups as Table 4. Specifically, we compare formula 7 of our method with 3 types of the function $d$: $L2$-normed distance (our choice), cosine similarity (the common choice in contrastive learning research) and $L1$-normed distance. Detailed results are summarized in the two tables below.
It can be seen that **(a)** our method always shows significant mIOU gains over the baseline with all 3 types of the function $d$; **(b)** comparatively, our method with $L2$-normed distance is the best, which supports our intuition that improved performance is attained by choosing the same type of function $d$ for the feature distillation loss (formula 3) and the omni-contrasting loss (formula 7) conditioned on the same source features.

| Function $d$ in Formula 7 (on Cityscapes) | mIOU (%) | $\Delta$mIOU (%) |
|:--|:--:|:--:|
| Baseline | 73.20 | n/a |
| $L1$-normed distance | 75.97 | +2.77 |
| Cosine similarity | 76.10 | +2.90 |
| $L2$-normed distance | 76.44 | +3.24 |

| Function $d$ in Formula 7 (on ADE20K) | mIOU (%) | $\Delta$mIOU (%) |
|:--|:--:|:--:|
| Baseline | 33.91 | n/a |
| $L1$-normed distance | 35.82 | +1.91 |
| Cosine similarity | 35.95 | +2.04 |
| $L2$-normed distance | 36.01 | +2.10 |

**4. The third weakness about the limited generality of our method to transformer-based structures**. **Our responses**: **(1)** The current design of Af-DCD cannot easily generalize to transformer-based structures, as we discussed in the last Section “Limitations of Af-DCD” of the Appendix. **The main reason** is: Af-DCD exploits dense pixel-wise information within each local patch via the feature partition across both channel and spatial dimensions to formulate contrastive feature mimicking conditioned on a single image input, whereas transformer-based structures built upon self-attention modules primarily encode global patch-to-patch feature dependencies, and the two appear to be in conflict with each other; **(2)** Actually, we conducted a distillation experiment to explore this. In the experiment, we applied Af-DCD to SegFormer (MiT-B4 encoder as teacher and MiT-B0 encoder as student) [1], a seminal transformer-based structure for semantic segmentation.
The table below shows the results, where Af-DCD only brings a 0.31% gain over the baseline; **(3)** A potential direction to address the above issue is how to preserve local information and align Af-DCD well with transformer-based structures. Please allow us to leave it as future research.

| Method (on Cityscapes) | mIOU (%) | $\Delta$mIOU (%) |
|:--|:--:|:--:|
| Teacher: SegFormer-MiT-B4 | 81.23 | n/a |
| Student (baseline): SegFormer-MiT-B0 | 75.58 | n/a |
| Af-DCD | 75.89 | +0.31 |

[1] Enze Xie, et al. “SegFormer: Simple and efficient design for semantic segmentation with transformers”, NeurIPS 2021. **Finally**, during the rebuttal phase, we also conducted more experiments to improve the ablation studies and added discussions to improve the clarity of our method. You are referred to our top-level responses titled **“Author Rebuttal by Authors”**, and our responses to the other reviewers, for details. Looking forward to your feedback. --- Rebuttal Comment 1.1: Comment: I have read the authors' response. The authors "agree with" many of the weaknesses. At this stage, I would like to see more "rebuttal". Despite the interesting idea of the specifically designed unsupervised method for semantic segmentation, this paper needs further improvement. So I tend to slightly decrease my rating. --- Reply to Comment 1.1.1: Title: Extra Responses to Your Reply to Our Rebuttal Comment: We sincerely appreciate your reply to our rebuttal.
Among all the questions and weaknesses you mentioned, **in the rebuttal we faithfully agree with two of the weaknesses** and also provide our responses to address both: **(a)** the first weakness, on improving the organization of references (**we did not miss any related paper in our original submission**), has already been well corrected in the rebuttal, as we believe your suggestions/comments are truly helpful; **(b)** the other weakness, about the main limitation of our method, **was frankly pointed out by ourselves and discussed in our original submission**; we admit it again in the rebuttal and provide pilot experiments and analysis for a better study of it. **We believe** that all the other questions and weaknesses you mentioned have been well addressed, demonstrating the effectiveness of our method. Considering the above facts, our rebuttal is, to the best of our understanding, honest and faithful to the facts rather than argumentative. We sincerely hope you can consider the aforementioned factors in your final rating. Looking forward to your reply.
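The three candidate distance functions ablated in the exchange above ($L2$-normed distance, $L1$-normed distance, cosine similarity) can be written down concretely. The shapes and the exact normalization below are illustrative assumptions, not the paper's formula 7:

```python
import numpy as np

def feature_distance(f_t, f_s, kind="l2"):
    """Distance between teacher and student feature vectors.

    f_t, f_s: (batch, dim) arrays. These are the generic forms of the
    three ablated choices; the paper's exact normalization may differ."""
    if kind == "l2":
        return np.sum((f_t - f_s) ** 2, axis=-1)        # squared L2 distance
    if kind == "l1":
        return np.sum(np.abs(f_t - f_s), axis=-1)       # L1 distance
    if kind == "cosine":
        num = np.sum(f_t * f_s, axis=-1)
        den = np.linalg.norm(f_t, axis=-1) * np.linalg.norm(f_s, axis=-1)
        return 1.0 - num / den                          # cosine distance
    raise ValueError(kind)

t = np.random.default_rng(0).normal(size=(4, 128))
for kind in ("l2", "l1", "cosine"):
    print(kind, feature_distance(t, t, kind))  # identical features -> ~0
```

All three vanish when student features exactly match the teacher's; the ablation in the rebuttal compares how well each drives the student toward that point.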
Rebuttal 1: Rebuttal: Dear Reviewers, Area Chairs, Senior Area Chairs and Program Chairs, We sincerely thank all four reviewers for their thorough and constructive comments. We are glad that the novelty, basic experiments and performance of our work have been mostly recognized by all four reviewers. In the past week, we carefully improved the experiments (using all the computational resources we have), the clarifications and the discussions of our work to address the concerns, questions and requests of all four reviewers. **In summary, we made the following improvements**: **(1)** To give a better understanding of the effectiveness of our Af-DCD method, we follow the constructive comments/requests of Reviewer YQfC, Reviewer vSYf and Reviewer wLTq and add several sets of ablative experiments on the Cityscapes and ADE20K datasets using the same experimental setups as Table 4, including: **(a)** an ablation study comparing the combination of our basic channel contrasting loss and spatial contrasting loss with our omni-contrasting loss; **(b)** an ablation study comparing our contrasting loss with 3 types of distance function; **(c)** an ablation study comparing the results with and without our omni-contrasting loss in the presence of all non-contrasting losses; **(d)** an ablation study comparing our basic feature imitation loss and our omni-contrasting loss. Besides the ablation studies reported in the original manuscript, these new ablation studies further demonstrate the effectiveness of our method, and provide more insights through experimental observations. **(2)** We follow the constructive comments/suggestions/requests from all four reviewers, and add more discussions and clarifications to improve the presentation and the explanations of our design insights, experiments and the method's limitations. **Finally, in the attached one-page PDF file, all the aforementioned experimental results are summarized in different tables**.
We will include the above experiments and discussions in our final paper. We hope our detailed responses are helpful to address the concerns, the questions and the requests of all four reviewers. Pdf: /pdf/c06ed77027e011cc3aee2d74ca25fab0cba9a2be.pdf
NeurIPS_2023_submissions_huggingface
2023
Bayesian Extensive-Rank Matrix Factorization with Rotational Invariant Priors
Accept (spotlight)
Summary: The authors consider the problem of matrix factorization of a noisy measurement in the setting where all matrices have a rotationally invariant prior. They provide a non-rigorous but comprehensive theoretical derivation of their results. They also provide a number of experiments validating their theoretical claims. Strengths: - Explicit formulas for the reconstruction of the matrix factors - Strong experimental support for the theoretical claims Weaknesses: - The analysis is limited to the rotationally invariant setting Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - It would be nice to have a short summary and outlook at the end of the paper - In line 67, what do you mean by a proper distribution? - Can you briefly elaborate on the relationship between your work and the works [34-36] mentioned in the introduction? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review. We address the questions below: 1. We will surely include a conclusion section in the final version, also taking into account the comments made by the referees. 2. Depending on the nature of the problem, one can consider a general class of priors for the factors and try to find the parameters maximizing the posterior. For example, in ref. [37] the authors consider Gaussian priors and try to estimate the matrices. We will rephrase the wording "proper distribution" in the final version. 3. Ref. [34] is a general reference on variational inference. In [35,36], the matrix factorization problem is solved when some of the data points (entries of the observation) are missing. In our setting, we have a full observation of the data; therefore, the problem settings are different. Solving MF with fully observed data based on variational inference is studied in ref. [37], where the authors showed that under a Gaussian prior (which is rotationally invariant) the optimal estimate using the variational inference approach is a reweighting of the SVD of the observation, which can be seen as an RIE. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the clarifications. I have slightly increased my evaluation.
Summary: This paper explores the Matrix Factorization problem, which involves estimating the matrices $X \in \mathbb{R}^{N \times N}$ and $Y \in \mathbb{R}^{N \times M}$ given the noisy matrix $S = \sqrt{\kappa} X Y + W$. The focus is on the high-dimensional regime, where $N/M \to \alpha$, and the investigation includes bi-rotationally invariant $Y$ and $W$, as well as symmetric rotationally invariant $X$. The authors examine rotationally invariant estimators, which are estimators that share the same singular vectors as the noisy matrix $S$. The paper derives rotationally invariant estimators based on oracle knowledge of the target matrices and demonstrates that they are also Bayes optimal. By assuming concentration and utilizing replica methods, the paper derives explicitly computable estimators from the oracle estimators. The empirical performance of these derived estimators is investigated and shown to closely match that of the oracle estimators. This suggests that the estimators derived using non-rigorous methods from statistical physics are indeed optimal. Strengths: The low-rank matrix factorization problem with finite-rank matrices is now a well-studied topic. Similarly, the low-rank matrix denoising problem with extensive (diverging) ranks has garnered recent interest. However, results on matrix factorization with extensive ranks have been relatively scarce. This paper aims to fill this gap in the literature by providing results for the matrix factorization problem with diverging ranks and under general rotationally-invariant priors. The paper is well-written overall, and Section 5 provides a concise and easy-to-follow overview of the derivation of the results, which are otherwise quite complex. Weaknesses: The main results of the paper, which are the explicitly computed Rotationally Invariant Estimators, rely on non-rigorous methods from statistical physics. 
While the empirical results are compelling, it would be valuable in the future to establish a more solid theoretical foundation for these estimators. Moreover, the assumptions made on the matrices $X$ and $Y$ may be considered somewhat unnatural. It would be beneficial for the authors to provide additional motivation as to why the findings of this paper could be of interest to the NeurIPS community beyond the specific problem examined here. This could help clarify the broader significance and potential applications of the research. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors expand on the relevance of the methodology developed in the paper for the analysis of the weight matrices of neural networks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Theoretical paper with no immediate negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
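The class of estimators described in this review (estimators sharing the singular vectors of the noisy matrix $S$, with the singular values re-chosen optimally) can be illustrated with a small oracle construction. This is a generic sketch of the "oracle RIE" idea rather than the paper's closed-form estimator; the matrix sizes and noise level below are arbitrary assumptions:

```python
import numpy as np

def oracle_rie(S, Y_true):
    """Oracle rotationally invariant estimator for a rectangular factor:
    keep the singular vectors (u_i, v_i) of the observation S and replace
    each singular value by the error-minimizing coefficient u_i^T Y v_i.
    Generic illustration of the 'same singular vectors as S' construction."""
    U, _, Vt = np.linalg.svd(S, full_matrices=False)
    coeffs = np.einsum("ij,jk,ki->i", U.T, Y_true, Vt.T)  # u_i^T Y v_i
    return (U * coeffs) @ Vt

rng = np.random.default_rng(0)
N, M = 40, 60
Y = rng.normal(size=(N, M))            # target factor (oracle knowledge)
S = Y + 0.5 * rng.normal(size=(N, M))  # noisy observation
Y_hat = oracle_rie(S, Y)
# S itself lies in the same class (coefficients = singular values of S),
# so the oracle choice can never do worse than using S directly:
print(np.linalg.norm(Y_hat - Y) <= np.linalg.norm(S - Y))  # True
```

The paper's contribution is precisely to replace the oracle coefficients $u_i^T Y v_i$ with explicitly computable quantities depending only on the observation and the priors.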
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. We agree that making the derivations mathematically rigorous is an interesting research direction. We want to bring your attention to the papers "Optimal cleaning for singular values of cross-covariance matrices" (arXiv:1901.05543) and "A short proof of Ledoit-Péché's RIE formula for covariance matrices" (arXiv:2201.05690), which rigorously derive the optimal RIE for covariance estimation with Gaussian priors. The method developed in these works involves Gaussian integration by parts and Gaussian concentration. We believe that, using this technique, we can establish the optimality of the proposed estimator at least for Gaussian priors, and this analysis would not involve the replica method or spherical integrals. However, such a rigorous mathematical analysis is at the moment beyond the scope of this paper. We agree that the assumptions are strong and far from practice; however, the setting considered in the manuscript is one of the first in which matrix factorization can be solved optimally in the high-rank regime, and we believe that this work may open up a way to study the problem in more general settings. Moreover, some of the assumptions are required to show the optimality of the estimator, but the estimator can be used in practice under milder conditions to get a spectral estimate that can be refined. Concerning the specific question on neural networks: we believe that our method can be applied to study the weight matrices in neural networks, at least when there is no non-linearity in the system. Non-linearities are prevalent in neural network models, which does not sit very well with rotational invariance, but it might be possible to consider "linearizations" such as an NTK approximation to circumvent this issue. This is an interesting open problem.
Summary: This paper considers a matrix factorization problem in a setting where the rank of the factor matrices grows linearly with the ambient dimensions. They assume that the factors follow a prior distribution such that: (1) One of the matrix factors is symmetric, (2) Both factors and the noise are drawn from rotationally invariant distributions, (3) The priors are known to the statistician. They propose to study a class of rotationally invariant estimators. They derive a closed-form expression for the oracle estimator in this class, and show that it is Bayes optimal. They propose an estimator and conjecture that its performance matches the oracle. They provide evidence for their conjecture through experiments. Strengths: This paper seems to be the first one that explores MF in this challenging setting, and opens up a new research direction that might be of interest. They derive a neat closed-form expression for the Bayes optimal estimator, although this cannot be implemented in practice. As an alternative, they propose an estimator that can be implemented in practice. Although their result is non-rigorous, simulation suggests that this is the correct thing to do. A sub-optimal estimator for one factor that does not require prior knowledge of $\mu_X$ is also proposed. Their presentation is nice and clean. Weaknesses: Although this paper presents nice technical contributions, it is not clear why this prior structure should be considered in practice. It would be nice to give a few practical examples that past results cannot cover but this work does. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I am wondering how sensitive the proposed estimators are to misspecification of the prior. 2. If the priors are not given, is there a way to estimate them? I feel the assumption that the prior is known for both the factors and the noise is a bit strong. Perhaps the authors should comment a little bit.
For example, explain when it is reasonable to assume this information is given. 3. Some literature that might be useful to include: Information-theoretic limits of MF: [1] Bayes-optimal limits in structured PCA, and how to reach them (arXiv:2210.01237) [2] Fundamental Limits of Low-Rank Matrix Estimation with Diverging Aspect Ratios (arXiv:2211.00488) AMP for rotationally invariant matrices: [1] Approximate Message Passing algorithms for rotationally invariant matrices (arXiv:2008.11892) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are clearly reflected in the model assumption, and societal impact not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for the review and the additional suggested references. We will add those (possibly with other references) in the final version. We agree that the required assumptions are restrictive and do not necessarily hold in practice; however, as mentioned in the manuscript, the proposed estimators can be used without the assumptions to get a sub-optimal estimate that can be processed further. Moreover, as pointed out in the comments, this work is the first one to consider the problem in the high-rank regime, and we believe it may open up a new way to study the problem in this regime. We address the questions below: 1. Analyzing the sensitivity of the estimators to mismatched priors is an interesting object of study that requires independent investigation, and is beyond the scope of this manuscript. There is a rich variety of scenarios that one can consider. Here, we give a brief speculation on the performance of the RIE with mismatched priors. $\bullet$ Mismatched prior on the signal. The RIE for estimating $\mathbf{Y}$ is independent of the prior of $\mathbf{Y}$, and it remains optimal in the case of a mismatched prior on $\mathbf{Y}$. For the case of estimating $\mathbf{X}$ with a mismatched prior, the estimator $\sqrt{\widehat{\mathbf{\Xi}_{X^2}} (\mathbf{S})}$ is applicable and is independent of the prior, although sub-optimal. However, the RIE for $\mathbf{X}$ depends on the prior, and misspecification of the prior leads to poor performance of the RIE (compared to the Oracle). But we believe that the RIE can still get a non-trivial estimate of $\mathbf{X}$ (which is indeed sub-optimal), as the parameters $\zeta_1, \zeta_3$ in eq. (8,9) are correctly evaluated. We provide a numerical check of the performance of the RIE in this case in the file uploaded as an official comment above. $\bullet$ Mismatched prior on the other factors.
Misspecifying the priors of the other two factors (other than the signal) leads to an incorrect evaluation of the parameters of the RIE, which can change the performance significantly. For example, consider estimating the matrix $\mathbf{X}$, with $\mathbf{Y}$ and $\mathbf{W}$ both Gaussian matrices but with variances unknown to the statistician. Assuming mismatched variances (different from the true ones) for $\mathbf{Y}$ and $\mathbf{W}$ leads to rectangular R-transforms which are equal to the true ones times some constant. This results in a non-trivial mismatch in the parameters $\zeta_1, \zeta_3$ (eq. 8,9) used in the estimator, which will change the estimated optimal eigenvalue. We provide a numerical check of the performance of the RIE in this case in the file uploaded as an official comment above. 2. We agree that the assumptions are strong; however, note that these assumptions are required to show the optimality of the estimators. In general, for $N$ large enough, estimating the prior of one factor is possible if the priors of the other two factors are known. For example, knowing the prior of $\mathbf{X}$ and the noise, we can estimate the spectral distribution of $\mathbf{Y}$ using the spectral distribution of the observation. For this, one needs to go through the free additive/multiplicative convolutions: from the spectral measures of the observation and the noise, one can find the rectangular R-transform of the spectral distribution of the product $\mathbf{XY}$; then, using the knowledge of the prior of $\mathbf{X}$, one can find an estimate of the spectral distribution of $\mathbf{Y}$. Moreover, even if we consider specific classes of priors for the factors, we do not necessarily find unique estimates. For example, in the simpler setting of additive denoising $\mathbf{X} + \mathbf{Z}$, under the assumption that both $\mathbf{X}$, $\mathbf{Z}$ have i.i.d.
Gaussian entries with unknown variances, it is not possible to estimate the variances uniquely for both $\mathbf{X}$ and $\mathbf{Z}$. Rotational invariance of the priors might not be very natural in practice, but our estimators could be used as an initialization for other algorithms (for example, iterative ones; although in the high-rank regime these remain to be investigated). Additionally, assuming priors other than Gaussian does not lead to neat expressions for the estimators, and the systems of equations must be solved numerically, which is impractical. On the other hand, considering Gaussian priors, especially for the noise, is a common assumption in practice. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for the detailed clarification. I have increased my rating.
Summary: For a matrix factorization model $S = \sqrt{\kappa} XY + W$, this paper proposes a method for estimating $X$ and $Y$ from $S$ under the assumption that the priors of $X$, $Y$, and $W$ satisfy certain rotational invariance properties and that their distributions of eigen/singular values are known. The proposed method is rather simple. First, using the singular value decomposition, we obtain the left and right singular bases of $S$, which are eigen/singular bases of $X$ and $Y$. Next, keeping the bases fixed, the eigen/singular values of $X$ and $Y$ are adjusted to minimize the average mean squared errors, which can be performed analytically with the knowledge of the limiting distributions of eigen/singular values of $X$, $Y$, and $W$. The Bayesian optimality of the proposed estimators is shown using the replica method from statistical mechanics and random matrix theory. Strengths: As far as I know, this is the first paper that shows a concrete practical method for constructing the Bayes optimal estimator for the $O(N)$-rank matrix factorization problem. Weaknesses: The shown optimality holds only under rather many assumptions (rotational invariance, knowledge of the limiting eigen/singular value distributions). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I am curious about what happens when $c$ is set to zero for the shifted Wigner. Does it cause any singularity for the estimator? Or does the estimator of $X$ continuously converge to the zero matrix as $c \to 0$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
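A minimal NumPy sketch of the two-step construction described in the review above: extract the singular bases of $\mathbf{S}$, then adjust only the singular values. The shrinkage function here is a hypothetical placeholder for illustration, not the paper's Bayes-optimal formula (which requires the limiting spectral distributions of $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{W}$):

```python
import numpy as np

def rie_denoise(S, shrink):
    """Keep the singular bases of S; replace each singular value via a
    scalar shrinkage function (the shape of a rotation-invariant estimator)."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return U @ np.diag(shrink(s)) @ Vt

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, n)) / np.sqrt(n)   # signal
W = rng.normal(size=(n, n)) / np.sqrt(n)   # noise
S = X + 0.5 * W

# hypothetical linear shrinkage, NOT the paper's optimal choice
est = rie_denoise(S, lambda s: 0.8 * s)
```

With the identity shrinkage, `rie_denoise` returns $\mathbf{S}$ itself; the paper's contribution is the analytic, prior-dependent choice of the per-singular-value adjustment.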
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments. We address his/her question below: $\bullet$ For $c=0$, the prior on the spectrum of $\mathbf{X}$ is symmetric, $\rho_X(x)=\rho_X(-x)$, and using the definition of the Stieltjes transform one can see from eq. (7) that the estimator is indeed 0 for all eigenvalues. We conjecture that the estimator is continuous at $c=0$. Rigorously analyzing the continuity of the estimator as a function of $c$ is non-trivial, as one needs to consider the effect of $c$ on the limiting spectral measure of the observation $\mu_S$, which enters the parameters $\zeta_1, \zeta_3$. However, ignoring this technicality, the function $G(z) + G(-z)$ in the estimator converges continuously to 0 as $c \to 0$. Moreover, our numerical checks support the continuity of the estimator. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. I am satisfied with it.
Rebuttal 1: Rebuttal: Numerical results on the sensitivity of the RIE to mismatched priors (response to the first question of Reviewer LQDn). In Figure 1, the spectral distribution of $\mathbf{X}$ is uniform on $[0,4]$, and both $\mathbf{Y}, \mathbf{W}$ are Gaussian matrices. We applied the RIE assuming two different misspecified priors for $\mathbf{X}$: a shifted Wigner with $c=2$, and a Wishart with aspect ratio $1$. Note that the estimator $\sqrt{\widehat{{\mathbf{\Xi}_{X^2}^*}}(\mathbf{S})}$ does not require knowledge of the prior of $\mathbf{X}$, and we get the same (although sub-optimal) performance in both cases. The RIE $\widehat{{\mathbf{\Xi}_X^*}}(\mathbf{S})$ performs worse than the Oracle estimator, but it still provides a non-trivial estimate. In Figure 2, we consider estimating $\mathbf{X}$ with a Wishart prior. $\mathbf{Y}$ has uniform spectral distribution on $[1,3]$, and the noise matrix $\mathbf{W}$ has Gaussian entries. The RIE is applied under the assumption that $\mathbf{Y}$ is Gaussian, so that its spectral distribution is the square root of the Marchenko-Pastur law (whose support is $[1,3]$). As discussed in the comment, we see that the RIE performs poorly compared to the Oracle estimator. However, note that the normalized MSE (normalized by the norm of the signal) is below $1$, and we get a non-trivial estimate of the signal matrix $\mathbf{X}$. Pdf: /pdf/7cabf8facc315f3e5a80c52ff42f088cb766d781.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Guarantees for Self-Play in Multiplayer Games via Polymatrix Decomposability
Accept (poster)
Summary: The paper studies theoretical performance guarantees for agents learned using self-play in multiplayer games. Self-play is a common machine-learning approach in multi-agent systems for generating unbounded quantities of training data, but agents trained using self-play may perform poorly against new agents whose behavior differs dramatically from those seen during training. Despite guarantees previously established for self-play agents in two-player constant-sum games, these guarantees do not extend outside of this class. To solve this problem, the authors identify a structural property of multiplayer general-sum games and use it to establish guarantees on the performance of strategies learned via self-play against new opponents. They show that any game can be projected into the space of constant-sum polymatrix games, which enables performance guarantees for the strategies produced by a broad class of self-play algorithms. The findings are empirically demonstrated on Leduc poker. Strengths: - I find the studied topic, which extends the theoretical guarantees of self-play beyond two-player constant-sum games, to be important to the multi-agent learning community. - The proposed method, projecting a general game into the space of constant-sum polymatrix games, is novel and interesting. - The theoretical results are verified on 3-player Leduc poker. Weaknesses: The technical sections of the paper are slightly hard to follow for a non-expert in this field. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can the authors please explicitly list the assumptions/conditions needed for the theoretical results? - Can the structural property identified in this paper extend to other application domains (outside of multi-agent self-play)?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I don't see negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. ### Response to Questions **Assumptions.** The main assumption we make is that self-play is performed by a no-regret learning algorithm. This means the average regret (the difference in utility between the chosen strategy and some hypothetical deviation) will be driven to 0 as the number of iterations increases. The set of deviations makes a difference in the type of resulting equilibrium, so we assume that the no-regret learning algorithms minimize external regret. This is a weak form of regret, and almost every no-regret algorithm minimizes a stronger type of regret. We also assume that an agent learning via no-regret self-play will extract their strategy via marginalization in order to play it against new agents. **Extensions.** One of the fundamental problems in game theory is the equilibrium selection problem. Equilibria are fixed points where agents do not want to change their strategies; however, if two agents found different equilibrium strategies (say, in self-play), then we do not generally know whether the resulting joint selection of strategies is itself an equilibrium. This problem makes choosing a good strategy against new agents very hard. In two-player constant-sum games, all equilibria are exchangeable, which means that a joint selection of equilibrium strategies is also an equilibrium. This solves the equilibrium selection problem for this class of games. Subgame stable CSP games also solve the equilibrium selection problem in n-player games, since equilibria of the whole game are also equilibria of the subgames, which are two-player constant-sum and hence exchangeable. This means equilibria of the whole game are also exchangeable. --- Rebuttal Comment 1.1: Comment: Thanks for taking the time to respond! The clarification of the assumptions is useful to know, and thanks for commenting on the extensions!
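The no-regret self-play assumption in the rebuttal can be illustrated with a minimal regret-matching loop on rock-paper-scissors. This is an illustrative sketch of no-regret self-play in a two-player constant-sum game, not the paper's CFR setup:

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum).
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)

def regret_matching_selfplay(payoff, iters=20000):
    """Both players follow regret matching; returns the average strategies.
    Average regret is driven to 0, so the averages approach equilibrium."""
    n = payoff.shape[0]
    reg1 = np.array([1.0, 0.0, 0.0])  # asymmetric start to break symmetry
    reg2 = np.zeros(n)
    avg1 = np.zeros(n); avg2 = np.zeros(n)

    def strat(reg):
        pos = np.maximum(reg, 0.0)
        return pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)

    for _ in range(iters):
        s1, s2 = strat(reg1), strat(reg2)
        u1 = payoff @ s2          # row player's value of each pure action
        u2 = -payoff.T @ s1       # column player's value of each pure action
        reg1 += u1 - s1 @ u1      # accumulate external regret
        reg2 += u2 - s2 @ u2
        avg1 += s1; avg2 += s2
    return avg1 / iters, avg2 / iters

s1, s2 = regret_matching_selfplay(A)
# the average strategies approach the uniform Nash equilibrium of RPS
```

In this two-player constant-sum case the averaged strategies come with the classic guarantee; the paper's question is what survives of this guarantee in n-player general-sum games.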
Summary: This paper asks and answers the question: "In what games does self-play (with regret-minimizing algorithms) in multiplayer imperfect-information games perform well?" The motivation behind the question is that such algorithms, such as multiplayer versions of CFR (which has theoretic guarantees in 2-player zero-sum games), have empirically performed well in some multiplayer games, despite no previous theory guaranteeing that they will do so. (The salient example is no-limit Texas Hold'em, in the work "Superhuman AI for multiplayer poker", Brown & Sandholm 2019.) This paper answers the question primarily by looking at the concept of the coarse correlated equilibrium (CCE), since that is what the policies of the regret-minimizing players will converge to. The authors show that if games can be decomposed into a bunch of 2-player constant-sum games, called a constant-sum polymatrix (CSP) game, and the CSP game fulfills some additional properties, then we can bound the worst-case performance of a player's CCE strategy. They also show that even in games which can't be factored into polymatrix games (which is most games), we can project them into the space of CSP games, and still bound the worst-case performance of each player's CCE strategy. They also refine the analysis further by examining worst-case performance only against other self-play strategies. Finally, they perform experiments on multiplayer Leduc poker, and show that the empirical worst-case performances are within the bounds predicted by their theory. Strengths: - I think this is an excellent paper. The question it asks and answers is fascinating, and one that I think was begging to be examined for several years now. Therefore, I judge this paper to be a significant contribution to science. - The results of the paper are novel and nontrivial. - The paper is well-written. The writing is clear and understandable. The introduction explains the background and motivates the problem well. 
Weaknesses: The paper mentions no-limit Texas hold'em as a motivating real-world positive example of self-play working in multiplayer games. I also suggest that the authors mention some real-world negative examples -- for example, FAIR's line of work on Diplomacy, particularly the self-play agents performing poorly against humans in "No-Press Diplomacy from Scratch", Bakhtin et al 2021 https://arxiv.org/pdf/2110.02924.pdf (the problem is the motivation of "Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning" Bakhtin et al 2022: "the resulting agent DORA does very well when playing with other copies of itself. However, DORA performs poorly in games with 6 human human-like agents.") typo on line 336: "with ... be the set" Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It's not clear to me how important the caveat in Footnote 3 on page 8 (section 6) is. So CFR does not necessarily converge to a CCE. The experiments in Section 6 use CFR. Is my interpretation correct that you take the marginal strategies of the CFR iterates, and hand-wavingly say that we can assume they are marginal strategies of (a distribution converging to) a CCE, even though they're not? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging review. ### Response to Weaknesses We think this is an excellent suggestion for our work, and we will definitely mention this. Since submission, we conducted additional experiments on a toy version of Hanabi (another game where self-play is known to perform well in training but poorly against new agents) and found that this game was not well-approximated by a CSP game. ### Response to Questions The footnote was in order to clarify a detail about the implementation of CFR. The OpenSpiel CFR is indeed guaranteed to produce the marginal strategies of a CCE, since the average of the marginal strategies is equal to the marginal of the average strategy profile. However, if one wanted to extract the actual joint distribution across pure strategies, one would need to use a method like CFR-JR. We will clarify this detail. --- Rebuttal Comment 1.1: Title: response Comment: Thanks for taking the time to respond. Glad to hear that my suggestion is good. The Hanabi experiments sound useful. Re: response to questions -- I think I understand now. Is this a correct take?: The distribution of joint strategies produced by CFR iterates does converge to a CCE. However, the footnote is explaining that if you take each player's marginal/average strategy, the strategy profile that you get from combining all of those is not a CCE. But this is fine, because the whole point of the paper is to analyze those marginal strategies anyways. --- Reply to Comment 1.1.1: Comment: Yes that is correct! Just to add a couple details: CFR iterates are behavior strategies, whereas CCE are distributions over pure strategy profiles. This is why you need an extra step with CFR-JR to convert the behavior strategies to equivalent mixed strategies when you want to extract the empirical distribution of play. But you don't need to do this to extract the marginals in behavior strategy form. 
The marginals *could* be CCE themselves (if they are a Nash equilibrium) but this is not necessarily the case.
Summary: In multiplayer games, the authors derive bounds for the vulnerability of marginal strategies trained via no-regret self-play against other, uncorrelated agents independently trained via no-regret self-play. This is done by projecting games onto the space of constant-sum polymatrix (CSP) games, which can be decomposed into a set of 2-player constant-sum games between individual players. The closer this projection is, the tighter the bound on self-play's vulnerability to other self-play-trained agents. This claim is validated by demonstrating that 3-player Leduc can indeed be closely approximated as a CSP game and that the calculated vulnerability bounds are relatively close to the empirically measured vulnerability among marginal CFR strategies from many seeds. Strengths: - A bound on the vulnerability of marginal no-regret strategies to other uncorrelated no-regret strategies is novel and highly useful to the game theory/AI community. - The paper is well written and concepts are clearly explained. Weaknesses: - I would have preferred to see a range of common tractable games examined to get an intuition on when and how often a game can be closely approximated as a CSP game (and thus provide a useful bound on vulnerability). Currently, experiments only include Leduc Poker. - The analysis method is not immediately transferable to large games, and this is clearly stated as a limitation. - Minor note: It's not explicitly stated until the very end of section 6 that the 3-player variant of Leduc Poker is used for experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are there any heuristics one may be able to look for in larger games that might indicate when a game has a high chance of being approximately decomposable as a CSP game? If there is intuition on this to be gained from explicitly analyzing smaller games, it would be a great thing to discuss. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: All limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and interesting questions. ### Response to Weaknesses 1. We agree that it would be interesting to validate our approach on a larger suite of games. Since submission, we have conducted experiments on a toy Hanabi game and found that self-play performs poorly against new opponents and that the toy Hanabi game is not well-approximated by a CSP game. 2. We think better algorithms for decomposing large games are a great direction for future research. Our analysis still applies if you can analytically show that a large game is subgame stable CSP. Since submission, we have also improved our algorithm, which allows us to more efficiently decompose Leduc poker, which already has about 25,000 information sets. 3. We will add this clarification. ### Response to Questions We think that poker might be well-approximated by a CSP game because for most hands, all but two players fold relatively quickly, which means the game really looks like a set of two-player games. Bad Card illustrates this intuition: its dominated-strategy-reduced game is CSP, even though the overall game is not. One could also look for independence in the interactions between agents. For example, if an agent were to simultaneously play two games of chess, this overall "game" would be CSP. --- Rebuttal Comment 1.1: Comment: Thank you for your replies. Adding Hanabi is a great addition. That addresses the one meaningful weakness I had with this work. I also appreciate that you plan to more clearly distinguish that, by 'self play', you mean 'no-regret self play', as hBPZ mentioned. This could lead to a small amount of initial confusion for some readers.
Summary: This paper explores the intriguing problem of why no-regret algorithms seem to approximate well in multiplayer games, a phenomenon that has been empirically demonstrated in multiagent poker. The authors identify a structural property in multi-player games that allows performance guarantees for strategies derived through self-play algorithms. They propose that multi-player games can be projected into a series of two-player constant-sum games, known as polymatrix games. The proximity of a game to this structure is hypothesized to diminish the effect of correlation issues on the removal of a mediator. The researchers take an algorithm-agnostic approach, which broadens the applicability of their analysis to a variety of game-theoretically inspired learning algorithms and MARL algorithms that converge to coarse correlated equilibria. Strengths: The paper's strength lies in its novel approach to an important problem. The authors introduce theory that all multiplayer games can be projected onto the space of polymatrix games, offering a promising direction for further research. If a high subgame stability game exists within this space, no-regret learning algorithms are predicted to converge to low-exploitability strategies. Additionally, the authors' algorithm-agnostic approach ensures that the analysis remains broadly applicable to a variety of game-theoretically inspired learning and MARL algorithms. Weaknesses: The paper's main weaknesses lie in its experimental section, which comes across as somewhat unclear. The section seems intended to demonstrate that CFR will always converge to a low-exploitable strategy due to the subgame stability of the game, but this is not clearly stated. If the goal is to show that no-regret algorithms will always converge, the experiment design should include many seeds with different hyperparameters and many different no-regret algorithms. Secondly, it's unclear how this theory can be practically applied. 
For instance, if a new game is introduced, can it be predicted ahead of time whether CFR or another no-regret algorithm will converge to a low-exploitability strategy? This potential application isn't clear and should be further highlighted if it is indeed feasible. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How can the theory be applied practically? Can it be used to predict whether a no-regret algorithm will converge to a low-exploitability strategy in a new game? Could the use of the term 'vulnerability' over 'exploitability' be explained or justified in this context? What are the detailed parameters and specifics of the experiments conducted, such as the number of seeds and the range of no-regret algorithms tested? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are certain limitations in the terminology used in the paper. The authors use 'vulnerability' instead of 'exploitability', although 'exploitability' is the term most widely used and recognized in this field. In addition, the term 'self-play' can lead to confusion, as it can generally refer to methods where RL agents play against themselves, which are not no-regret. The term 'no-regret' would be clearer. Lastly, at the end of page 9: “and if there exists a game with this set with high subgame stability” do you mean “and if there exists a game in this set with high subgame stability”? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. ### Response to Questions **Practical applications.** This work can be used practically to show in which multiplayer games pre-computing a strategy via self-play is desirable. It absolutely can be used to predict whether the strategy of a no-regret algorithm will have low vulnerability. We know that for any two-player constant-sum game, strategies learned via no-regret self-play have performance guarantees. We see the practical use of our theory in much the same way: showing that a particular game/application is subgame stable CSP will give similar theoretical guarantees. Showing a game has these properties can be done analytically by reasoning about the structure of the game. One could also sample strategies and empirically test whether a game is subgame stable CSP in the neighborhood of these strategies using our algorithm. This approach could quickly build intuition about a game before formally proving it is subgame stable CSP. We also think our work could be applied in mechanism design for multiplayer settings. A mechanism designer could be sure to design a game to be subgame stable CSP. The behavior of agents would then likely be more predictable and stable. **Vulnerability vs Exploitability.** We chose the term "vulnerability" over "exploitability" in our work since "exploitability" already has a different meaning in the literature for n-player games. For example, in this paper (https://www.ijcai.org/proceedings/2022/0484.pdf) exploitability is defined as the average incentive to deviate across players. This is a different quantity from vulnerability, which we define as the difference in utility between a strategy profile and the worst-case joint deviation by $-i$. **Experiment Details.** We tested a single no-regret algorithm: vanilla CFR (the code is here: https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/cfr.py). We used CFR with simultaneous updates.
The only modification we made was to allow random initialization of CFR’s initial strategy, which is uniformly random by default. We used 30 random seeds. We chose CFR since it is a widely used algorithm and efficient for large games. ### Response to Limitations We will clarify the distinction between the usual RL self-play and no-regret self-play. We do mean the latter sentence. Thank you for catching that. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks for answering my questions. After reading the rebuttal and other reviews I choose to keep my score at a 6.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Data Minimization at Inference Time
Accept (poster)
Summary: This paper questions the necessity of soliciting all user information at inference time, with an eye towards privacy-sensitive domains such as law, recruitment, and healthcare, where learning models typically require access to extensive and sensitive user data for accurate prediction. The authors propose that in some settings, individuals might only need to provide a small subset of their information to obtain accurate predictions, thereby protecting their privacy and easing the burden on organizations verifying the accuracy of disclosed information. To that end, the authors introduce a framework for simultaneously considering model certainty and data minimization. Their setup considers the subsets of features that individuals can disclose in order to receive comparable prediction quality while providing fewer features. The authors propose algorithms for choosing the appropriate subset and present theoretical justification and empirical findings. Initial evaluations reveal that individuals may only need to disclose about 10% of their information to achieve the same accuracy level as a model that uses all user information. Strengths: The primary strength of this paper is that it thoughtfully addresses a neglected and high-impact research question. As the authors note, there is strong evidence that "most users would only need to reveal a small portion of their sensitive data to achieve accurate model predictions with either absolute certainty or high confidence." They include a strong motivating example in Section 2, Figure 1, to make it clear that a label could sometimes be obtained without requesting all sensitive features. Similarly, they underscore the importance of this work by noting the real-world impacts on privacy and the burden of verifying the provided information.
Other strengths: - This paper includes a thoughtful framework for studying this problem, quantifying the tradeoff between model certainty (though not model performance, as claimed) and privacy loss. - The proposed methods include an efficient algorithm for a joint Gaussian setting as well as a less efficient Bayesian modeling alternative. - The experiments appear well-designed, including considering random subsets of "private" features using publicly available datasets. Weaknesses: The primary weakness of this work is the use of entropy as a proxy for model performance. While the authors initially state that their goal is to "produce accurate or nearly accurate predictions during inference" using fewer features, the goalposts subtly shift to "accurately predict[ing] the **output of the model** using the smallest possible number of sensitive features." This shift motivates the use of prediction entropy, rather than predictive performance, as a metric for the impact of obtaining additional features. This is significant because while entropy obeys the "information never hurts" principle, predictive performance does not (see, e.g., https://proceedings.mlr.press/v97/ustun19a.html). Considering predictive performance (instead of entropy) likely makes this problem more difficult, and it's possible that using entropy is a reasonable proxy in some cases. However, this choice merits at minimum discussion and ideally theoretical and experimental justification. Other weaknesses: Unclear/imprecise notation and language: - "data leakage" is used inconsistently throughout the paper. It is introduced as "the percentage of sensitive features that are revealed unnecessarily, meaning that their exclusion would not significantly impact the model's output," but the authors later state that "[increasing $\delta$ yields] reduced data leakage but also less precise model predictions," which appears contradictory.
This term is later used as a metric in the experiments, but without a precise definition and with axes that seem to contradict prior use. - On lines 155-156, the authors state that imputation does not occur and that the "unrevealed features are treated as random variables and are integrated during the prediction process." This process is not further explained but appears to be implicit in Equation 5 with a Schur complement (and arguably as a form of imputation). - "public" is used both to describe a subset of features ("public x_P and sensitive x_S features") and a subset of samples ("trained on a public dataset"). - The hyperparameter T appears on line 9 of Algorithm 1 but is not discussed, from what I could tell, either theoretically or empirically. - In line 13 of Algorithm 1, it is unclear if feature j* is actually obtained here or if it is added to the set of features to be obtained. I assume the former. - The assumption that the cost of obtaining each feature is uniform should be explicitly stated. Other: - The authors claim throughout the paper that they are the first to study this type of work (line 47, line 420) despite the presence of several other publications on this topic (e.g., https://arxiv.org/abs/1602.03600 or https://proceedings.mlr.press/v5/yu09a.html). - Some words were used improperly (e.g., "contrast" on line 9, "recur" on line 196, "valid" (sufficient?) on line 227). Overall, the paper could use another pass to make the writing clearer. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you explain or justify your use of model entropy as a proxy for model performance? - The thresholding explanation on line 190 was a bit confusing -- can you explain this process in more detail? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
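The motivating idea the review highlights (that a label can sometimes be certified from a subset of features) can be illustrated for a linear classifier with features bounded in [0, 1]. This toy interval-arithmetic sketch is not the paper's algorithm, which uses entropy and Monte Carlo sampling; it only checks whether the sign of the score is already determined by the revealed features:

```python
def prediction_certain(theta, x, revealed, lo=0.0, hi=1.0):
    """Check whether sign(theta . x) is already determined when only the
    features in `revealed` are disclosed and each hidden feature may take
    any value in [lo, hi] (illustrative interval-arithmetic certificate)."""
    fixed = sum(theta[i] * x[i] for i in revealed)
    hidden = [i for i in range(len(theta)) if i not in revealed]
    # worst-case range of the hidden features' contribution to the score
    low = fixed + sum(min(theta[i] * lo, theta[i] * hi) for i in hidden)
    high = fixed + sum(max(theta[i] * lo, theta[i] * hi) for i in hidden)
    return low > 0 or high < 0  # same hard label for every completion

theta = [3.0, 0.5, -0.5]
x = [1.0, 0.2, 0.9]
# revealing only the dominant feature already fixes the positive label
print(prediction_certain(theta, x, revealed={0}))  # → True
```

Here the hidden features' weights are small enough that their worst-case contribution cannot flip the score, so the remaining features never need to be disclosed.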
Rebuttal 1: Rebuttal: Thank you for your time and review. Below, we report our answers to your questions. Feel free to let us know if there are further questions or concerns and we'll be more than happy to elaborate. ### Comment on hyperparameter T You are right, and we appreciate the catch. $T$ is indeed the number of Monte Carlo samples used in the algorithm. We acknowledge that this was not clearly explained in the original text, and we will include a more detailed explanation of this hyperparameter in our revision. ### Comment on line 13 of Algorithm 1 Your assumption is correct. In line 13 of Algorithm 1, we identify the most promising unrevealed feature $j^\star$, the one that provides the most information about the model's prediction. The value of this feature is then obtained from the user, and we remove it from the set of unrevealed features $U$ and add it to the set of revealed features $R$. ### Comment on cost of obtaining each feature Thank you for highlighting this point. We indeed operate under the scenario in which the cost, whether it's the privacy cost or the cost of obtaining the feature value for sensitive features, is uniform across the features. This assumption will be explicitly stated in our revised manuscript to ensure clarity. ### Comment on related work We thank the reviewer for highlighting these relevant works. Upon review, we found that the shared papers focus on different aspects, such as training time, reinforcement learning (in the first paper), and multi-view learning (in the second paper). Our work, in contrast, specifically targets the testing phase, where a pretrained classifier is given. Therefore, while the cited works are indeed valuable and related, the context and focus of our research distinguish it from these prior studies. This helps clarify the claim made in lines 47 and 420 about the novelty of our approach in this particular domain.
We will be happy to discuss such related work in the final version of our paper (please also see the current discussion of related work in Appendix B). ### Q1: Model performance Indeed, the use of model entropy in our work serves as a tool to measure the uncertainty associated with the model's predictions. We focus on the trade-off between revealing sensitive features and maintaining prediction accuracy, and we want to achieve better accuracy for the whole population while minimizing data leakage, which includes revealing fewer features. The model entropy helps us quantify the uncertainty in the model's predictions. As we reveal more features, the entropy typically decreases, reflecting that the model becomes more confident in its predictions. Conversely, if we reveal fewer features, the entropy may increase, indicating higher uncertainty. However, it is important to note that revealing more sensitive features does not necessarily imply higher prediction accuracy for some individuals. This is obviously important in personalization settings. However, and importantly, in our study we have empirically observed that revealing more features generally ensures better accuracy across the whole population (see, for instance, Figure 6). But designing a personalized algorithm that considers both data leakage and accuracy can be challenging, and this relationship may not hold for every individual. ### Q2: Further explanation of line 190 Absolutely! In our framework, we have to compute the distribution of $\tilde{f}_\theta$ over the uncertainty of certain unrevealed sensitive features. If the distribution of these unrevealed features is Gaussian, then the soft-label prediction (i.e., $f_\theta(x) = \theta^\top x$) is a linear function of them, so its distribution is also Gaussian (or approximately Gaussian in the case of a non-linear model). Now, here's where thresholding comes into play. To move from the soft-label prediction to a hard-label prediction, we apply a thresholding operation.
Since the soft-label prediction follows a Gaussian distribution, thresholding it turns the prediction into a Bernoulli variable. In other words, we are converting a continuous-valued prediction into a binary decision by comparing it to a threshold. The thresholding process is a key step in translating continuous predictions into categorical outcomes, especially when we are dealing with uncertainties tied to unrevealed features. We understand that the concise description in line 190 might have been unclear, and we plan to include a more extended and illustrative description in the final version of the paper to alleviate any confusion. We hope we have addressed all your concerns, and we welcome any further questions or feedback.

---

Rebuttal Comment 1.1: We wanted to reiterate our gratitude for your time and review and would like to check if you had any additional questions or comments. Also, it appears we had a formatting issue in the previous response. It should read: Absolutely! In our framework, we have to compute the distribution of $\tilde{f}_{\theta}$ over the uncertainty of certain unrevealed sensitive features. If these unrevealed features follow a Gaussian distribution, then the soft-label prediction is also Gaussian, or approximately Gaussian in the case of non-linearity. Many thanks!

---

Rebuttal Comment 1.2: I appreciate your clarifications and think adding the revisions you've described here will improve the manuscript nicely. Regarding the choice of entropy as a metric, I still find your explanation and justification unsatisfying. If this work aims to "achieve better accuracy for the whole population while minimizing data leakage," then using model certainty or entropy as a proxy for accuracy requires theoretical and/or experimental justification.
While you noted general relationships between disclosure and entropy and between disclosure and performance, these trends do not lead to the conclusion that entropy is a sufficient proxy for performance. While, as you noted, exploring the relationship between disclosure and performance is difficult, one alternative is to make it clear that this paper focuses on improving **certainty** (not accuracy/performance) while minimizing "data leakage," and to remove claims about improving performance.

---

Reply to Comment 1.2.1: Thank you for your comment. We agree with your suggestion and will make it clearer in the paper that our primary focus is on minimizing data leakage while improving certainty, and, at the same time, ensuring that our claims are consistent with our focus and the evidence provided. Note also that in our work, entropy is used as a link to core feature sets, which, in turn, capture our concept of data leakage, which is of central interest in data minimization. Providing certificates on accuracy would indeed be a useful desideratum and is a topic of future exploration, but we also note that this is the first work exploring data minimization at inference time, and we believe it will pave the way to additional significant contributions. Thank you again, we appreciate your support!
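To make the thresholding step discussed in this thread concrete, here is a minimal sketch using only the Python standard library. It assumes a linear model $\theta^\top x$ whose unrevealed features are independent Gaussians; the function name and interface are illustrative, not the paper's implementation:

```python
from statistics import NormalDist

def hard_label_probability(theta_r, x_r, theta_u, mu_u, sigma_u):
    """P(hard label = 1) when the pre-activation is
    s = theta_r . x_r + theta_u . X_u, with X_u ~ N(mu_u, diag(sigma_u^2)).

    A linear function of independent Gaussians is Gaussian, so thresholding
    s at 0 yields a Bernoulli variable with parameter p = P(s >= 0).
    """
    mean = sum(t * x for t, x in zip(theta_r, x_r)) \
         + sum(t * m for t, m in zip(theta_u, mu_u))
    var = sum((t * s) ** 2 for t, s in zip(theta_u, sigma_u))
    if var == 0.0:  # no uncertainty left: degenerate (0/1) Bernoulli
        return 1.0 if mean >= 0 else 0.0
    return 1.0 - NormalDist(mean, var ** 0.5).cdf(0.0)
```

Note that with every feature revealed (empty `theta_u`), the probability collapses to 0 or 1, which is exactly the "100% confidence" hard-label regime the authors describe later in the thread.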
Summary: The authors address the problem of data minimization at inference time, which poses a genuine challenge in real-world applications, where users might want to hide sensitive or personal attributes. They provide an efficient algorithm to sequentially determine the appropriate attributes for an individual, with the goal of maintaining the original predictive accuracy (based on all attributes). Strengths: 1. The considered problem is important and timely. 2. The proposed theoretical framework introduces some interesting novel concepts. 3. The paper is well motivated and well-written. Weaknesses: 1. The addressed problem is inherently a privacy problem; if some feature values are not explicitly available at inference time, it does not mean that they cannot be inferred. In practice, if the predictive quality does not change after hiding some feature values, it is a strong indication that these values can be inferred from the other values. 2. The overall objective of maintaining the original predictive quality with the minimum amount of private features is never explicitly formalized/defined. 3. In many real-world applications that use personal data, accuracy is the worst measure of predictive quality, so focusing on and reporting this measure is not really meaningful (e.g., when the goal is good model calibration on imbalanced classes). 4. Relevant related work has not been considered: "I Prefer not to Say: Operationalizing Fair and User-guided Data Minimization", T. Leemann, M. Pawelczyk, C. T. Eberle, G. Kasneci. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Can you comment on the weaknesses above and limitations below? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The novelty of the work is rather limited, as relevant work on this topic has already addressed similar challenges. The formalization of the approach and the theoretical framework is limited, as it mainly aims at maintaining high accuracy and does not consider the true privacy of the features that users wish to hide. The evaluation is also focused on accuracy, which is not a meaningful measure in real-world ML applications based on tabular user data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Thank you for your time and review. Below, we report our answers to your questions. Feel free to let us know if there are further questions or concerns, and we'll be more than happy to elaborate.

### Comment on `The addressed problem is inherently a privacy problem ... can be inferred from the other values`

We appreciate your perspective on the inherent privacy concerns in the problem we've addressed. However, we'd like to clarify that the situation you describe might not necessarily hold in our context. To illustrate our point, let's revisit the motivating example from Section 2 of our paper. In this example, the model's prediction remains invariant with respect to sensitive attributes like _Income_ and _Location_ when _Job_ equals 1. This behavior holds true even if all variables are independent. Thus, in this particular case, the fact that the predictive quality does not change after hiding some feature values does not imply that these values can be inferred from the other values. Given the published _Job_ information, it is ineffective to use the model predictions to infer the original values of _Income_ and _Location_. We understand the broader concern about potential inference from hidden features, and we agree that it's a critical consideration in privacy-related research. However, the specific mechanism we've explored offers safeguards against such inference in the scenarios we've examined. We're open to further discussion if you have additional concerns or insights.

### Q2: Formalization of our objective

We have indeed described the overall objective of maintaining the original predictive quality with the minimum amount of private features. This description can be found in Line 65 of Section 2. We chose to express this goal through an English description, believing it to be sufficiently clear and avoiding additional mathematical notation that might complicate the reading.
However, we are receptive to your feedback and are certainly open to incorporating a formal definition upon paper acceptance.

### Q3: Choice of accuracy metric

We acknowledge that accuracy may not be suitable for all real-world applications; however, in the context of our work, we adopted accuracy as the evaluation metric for specific reasons.

- First, we considered both binary and multiclass classification problems, making AUC less applicable in our situation. This decision is elaborated further in the Appendix.
- Second, and more crucially, our work emphasizes obtaining the model's hard prediction, where confidence can be assessed with 100% certainty even when some sensitive features remain unrevealed. This scenario differs fundamentally from soft-label prediction, where a full confidence estimation of the prediction score is impossible if any features are undisclosed.

In essence, our focus on hard-label prediction, as opposed to soft-label prediction, requires users to reveal less sensitive information. This approach is aligned with the concept of minimum uncertainty, allowing us to achieve the highest confidence in our predictions. We are of course open to considering other suitable metrics and appreciate your insights, but we also hope this explanation clarifies our choice of accuracy as the evaluation metric.

### Q4: Related work

Thank you for bringing the paper by Leemann et al. to our attention. We are aware of this work. It focuses on the privacy concerns related to opting out from data collection for individuals who choose not to share certain information. While the subject of privacy is a common theme between our work and the paper you mentioned, the direction and approach taken in our research are quite distinct. In our study, we are primarily concerned with minimizing data leakage by revealing the least amount of sensitive features without affecting prediction accuracy.
Although the connection between the two works is somewhat tangential, we will consider adding a reference to this paper in the final version of our work for completeness and to acknowledge the broader context of privacy research. We appreciate your feedback and hope you could reconsider your score in light of our responses to your questions.

---

Rebuttal Comment 1.1: We wanted to reiterate our gratitude for your time and review and would like to check if you had any additional questions or comments. Many thanks!

---

Rebuttal Comment 1.2: Title: Thank you for your reply! Comment: Thank you for your reply. Please find my answers below. Q1: "it is ineffective to use the model predictions to infer the original values of Income and Location" — ineffective doesn't mean impossible. If it is possible to infer the original values of the sensitive features, your approach misses the motivational point, because what is the point of identifying the "appropriate attributes for each individual to provide" if it is not privacy? Q2: A mathematical formulation of the objective would certainly be less ambiguous. Q3: I find it quite impractical to predict the hard labels directly. But if you decide to do so, you can still conduct a precision-recall analysis (in addition to accuracy). For multi-class cases, you can consider micro and macro averages. Q4: I think broadening the spectrum of different contexts (and related approaches) in which users would like to hide or provide sensitive information would greatly benefit the current work. For now, I will keep my initial score.

---

Reply to Comment 1.2.1: Thanks for answering our rebuttal. **Q1**: We urge you to review the motivating example in Figure 1 again. We believe this will clarify your question. In that example, it is impossible to infer the sensitive attributes (Location, Income) when Job, Loc, and Inc are mutually independent, i.e., $\Pr(\text{Loc}, \text{Inc}) = \Pr(\text{Loc}, \text{Inc} \mid \text{Job})$.
However, if we observe $\text{Job} = 1$, we know for sure that $1 \cdot \text{Job} - 0.5 \cdot \text{Loc} + 0.5 \cdot \text{Inc} = 1 - 0.5 \cdot \text{Loc} + 0.5 \cdot \text{Inc} \geq 0$ for any admissible values of Loc and Inc (which lie in $[-1, 1]$ in this example). Hence, we know the model prediction (**not the ground truth**) of that individual even though we have not observed their Inc and Loc feature values. We note that this reviewer earlier mentioned the paper "I Prefer not to Say: Operationalizing Fair and User-guided Data Minimization", T. Leemann, M. Pawelczyk, C. T. Eberle, G. Kasneci. Our setting, motivation, and privacy notion are certainly different from those in that paper. In our context, users are not given the option to choose which features to release; instead, entities such as the system, the bank, or the insurance company make that decision. The term "appropriate attributes" refers to specific features that the system believes, if released, can provide the most insight or additional information regarding the model's predictions. When all features are independent, the model will of course select the most important one. However, note that we focus on minimizing data at inference time, and the determination of the "most important" features is based on the training data, which is publicly available in our setting. Therefore, there is no privacy loss incurred in this process, within the setting of our paper. We will certainly take your other points into consideration. Thank you for the feedback! From your response, it also appears that we may have clarified all your concerns. Let us know if this is not the case, and again we hope you could reconsider your score in light of our responses to your questions.
Consider the motivating example of a logistic regression model with decision rule $1 \cdot \text{Job} - 0.5 \cdot \text{Loc} + 0.5 \cdot \text{Inc} \geq 0$. If a user's Job value is 1, we can confidently predict that their hard label will always be 1. However, their soft label, or prediction score, depends on the unrevealed attributes Inc and Loc. For example, user A with Job = 1, Loc = -1, and Inc = 1 will have a score of $\frac{1}{1 + \exp(-2)}$, while user B with Job = 1, Loc = 1, and Inc = -1 will have a score of $\frac{1}{1 + \exp(0)}$. By estimating only hard labels, users can reveal fewer sensitive features without compromising the model's accuracy. This approach not only minimizes data leakage but also reduces the burden on the bank or institution and saves time for the users who have to provide their data. This efficiency comes at the cost of providing less information, as no soft labels or prediction scores are given. This trade-off aligns with the well-known 'no free lunch' principle.
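The hard- versus soft-label distinction in this example can be reproduced in a few lines. This is a toy sketch of the motivating model from the thread, not the paper's code:

```python
import math

def soft_label(job, loc, inc):
    # Motivating logistic model: score(x) = 1*Job - 0.5*Loc + 0.5*Inc
    return 1.0 / (1.0 + math.exp(-(job - 0.5 * loc + 0.5 * inc)))

def hard_label(job, loc, inc):
    # Threshold the soft label at 0.5 (equivalently, the score at 0).
    return int(soft_label(job, loc, inc) >= 0.5)

# Users A and B share Job = 1 but differ on the unrevealed attributes:
# their soft labels differ (1/(1+e^-2) vs. 1/(1+e^0)), yet with Job = 1
# the hard label is 1 for every Loc, Inc in [-1, 1].
assert all(hard_label(1, loc, inc) == 1
           for loc in (-1.0, 0.0, 1.0) for inc in (-1.0, 0.0, 1.0))
```

This is precisely why requesting only what is needed for the hard label lets the user keep Loc and Inc undisclosed.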
Summary: This paper considers the problem of data minimization at inference time. Consider a set of features $X$ which consists of public features $X_p$ and private features $X \setminus X_p$. A model has been trained with all the features $X$, i.e., $f(X)$. Now, the goal is to allow for inference revealing only a subset of the private features $X_r$ and keeping some of them unrevealed, $X_u$. In essence, $X = X_p \cup X_r \cup X_u$. Strengths: The paper introduces a new and interesting problem, i.e., data minimization during inference time. They provide entropy-based measures to decide whether to include a feature or not. They also provide theoretical guarantees and experimental results. Weaknesses: While the authors say this is the "first" work to do so, I think there are substantial similarities with the problem of feature selection or feature engineering as well as inference under missing data. The differences should be made clear with references. The use of the term leakage can be a bit confusing, since leakage is used here for the number of features revealed. But of course, the features can just be random noise and hence not leak anything, whereas one feature can also be highly informative of everything and cause leakage. I think a different terminology should be used here, since leakage sounds more like "information" rather than "number of features". My most important concern: How does this approach compare with applying a local explanation framework and dropping the least important private features? For example, for each user, if you apply SHAP or LIME or even feature attributions, and then drop the private features with the least contribution for that local user, what performance would you get and how would that compare to your method? I would be happy to increase my rating based on a discussion/comparison with just applying existing local explanations for this problem statement.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses COMMENT: After rebuttal, I increased my score by 2 points. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses (last point on my major concern) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Thank you for your time and review. Below, we report our answers to your questions. Feel free to let us know if there are further questions or concerns, and we'll be more than happy to elaborate.

### Q1

Thank you for pointing out the apparent similarities between our work and existing problems in feature selection, feature engineering, and inference under missing data. We acknowledge that these are relevant domains, and a thorough understanding of how our work differs is crucial. *We have indeed addressed these differences in Section B (Related Work) of the Appendix*, where we detailed the distinctions between our approach and prior works. The space constraints in the main text prevented us from including this comparison there, but we ensured that the appendix provides a comprehensive examination. Should you require more specific insights or have any particular concerns about these comparisons, please let us know, and we'll be glad to address them.

### Q2

Thank you for highlighting the potential confusion surrounding our use of the term "leakage" to describe the number of features revealed. Your observation about the possibility of one feature being highly informative, or features being mere random noise, is insightful. In our work, we specifically focused on the concept of leakage as the percentage of sensitive features a user needs to reveal. This notion aligns with privacy policies on data minimization, such as GDPR, and we found it to be an interpretable measure in our context. In all the datasets we explored, the corner case you mentioned did not occur. However, we do acknowledge your concern and agree that considering alternative terminology could help clarify our intentions. Additionally, the idea of reporting a noisy version of a sensitive feature as a privacy precaution is worth further exploration. Your feedback has prompted us to reflect on our terminology, and we appreciate the opportunity to clarify our approach.
We will take into account your suggestion in our revision.

### Q3

This is a great question. In our specific context of data minimization, applying SHAP or LIME to each individual would necessitate the revelation of *ALL* features at test time in order to quantify the contribution of each feature. This requirement contrasts with our goal of minimizing the number of sensitive features released at test time. Our method is aligned with preserving privacy by controlling the information revealed, and we've designed it to consistently improve over methods that select features based on their importance (evaluated on the entire population). Our results across different datasets substantiate this improvement. We appreciate this thoughtful feedback, and it will certainly guide our further analysis and discussions in future revisions. We also hope you could reconsider your score in light of our responses to your questions.

---

Rebuttal Comment 1.1: We wanted to reiterate our gratitude for your time and review and would like to check if you had any additional questions or comments. Many thanks!
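As used throughout this thread, "leakage" is simply the fraction of sensitive features a user had to reveal. A one-line sketch (the function name is illustrative, not from the paper):

```python
def data_leakage(revealed_sensitive, all_sensitive):
    # Leakage as defined in this rebuttal: the percentage of sensitive
    # features a user needs to reveal, independent of how informative
    # each individual feature actually is.
    return len(set(revealed_sensitive) & set(all_sensitive)) / len(all_sensitive)
```

This makes the reviewer's point concrete: the metric counts features, not information content, which is exactly the terminology concern raised above.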
Summary: The authors propose that in a large number of applications of machine learning, reasonable model accuracy can be realized without the model having access to the entire feature set. This has implications for privacy and data-sharing, as a model that adaptively selects features to solicit would retain model performance even when minimal sensitive data is revealed to the model. Strengths: * The motivation for the paper is clear and important. The ability to reduce the amount of disclosed data is appealing and a thoughtful, timely contribution. * The authors attempt to address obvious questions that may arise, specifically around non-linear relationships between the input features. Weaknesses: * The empirical evaluation of the model is somewhat weak. The authors do present results in terms of accuracy; however, there is no discussion as to the source of performance degradation, if any. It would be nice to see a more holistic evaluation that takes into account other aspects of the classifier's performance, including ROC performance and calibration. * For linear classifiers, computing the argmax over the estimated entropy is easy, as it factorizes over the individual dimensions of the core set. However, this may not be the case when the relationship between features is non-linear. Can the authors comment on this? Specifically, can the presented algorithm still be tractable if higher-order interactions between the core set of features are considered? * The current evaluation pipeline assigns attributes as undisclosed at random. I would recommend pre-specifying the disclosure rate based on a certain notion of the harm that revelation of that feature would cause. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The authors proposed a feature sampling strategy based on greedy selection on the Expected Information Gain. Such a strategy has shown benefit in other areas of machine learning, especially in active learning.
There could potentially be many other well-motivated acquisition functions in the context of the problem the authors present. Have the authors considered other such possibilities? If yes, can the authors comment as to why Expected Information Gain (Entropy) was selected as the most reasonable choice? The presented algorithm in its current stage does a one-step greedy lookahead, which would work when the relationship between features is linear. Can the authors comment on tractability when looking at the power set of all potential features that make up the core set? I am willing to tweak my scores based on engagement with the authors and other reviewers in the discussion phase. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There are issues with model safety and trust when a model adaptively solicits information from a user. Specifically, can the authors comment on how such an approach might affect users' trust in the system when the model seeks disparate information from disparate users? I am not looking for a theoretical argument here, but more of a value-judgement-based discussion of the potential implications of disparate treatment under such a model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Thank you for your time and review! If there are further questions or concerns, we'll be more than happy to elaborate.

### Empirical evaluation

We appreciate the suggestion to include a more holistic view of the classifier's performance. Below, we address your concerns and provide additional insights.

1. **Trade-off between Data Leakage and Accuracy:** We indeed discuss the trade-off between data leakage (the number of features a user needs to reveal) and model accuracy in Section 3. Our approach is tailored to the user's willingness to reveal more or fewer features, leading to varying levels of accuracy. We believe this aspect of our work offers a nuanced understanding of how information revelation impacts predictions.
2. **Choice of Evaluation Metric:** We chose accuracy over the AUC metric for two main reasons:
   - AUC is generally designed for binary classification tasks. In our work, as elaborated in the Appendix, we also tackle multiclass classification problems.
   - Our primary concern is to obtain the model's hard prediction, which can be estimated with 100% confidence ($\delta = 0$) even when not all sensitive features are revealed. This contrasts with soft-label prediction, where 100% confidence is impossible if some features remain unrevealed. Hard-label prediction requires users to reveal less sensitive information than soft-label prediction when considering minimum uncertainty or aiming for 100% confidence.
3. **Additional AUC Analysis:** We understand the importance of the AUC metric, and to address your concern, we have evaluated our F-score method (with $\delta = 0.2$) against the Random and Feature Importance methods on the Bank data. The results, shown in the table below, indicate AUC values similar to the Importance baseline, with the added benefit that users need to reveal fewer features (as detailed at the bottom of Figure 6).
| m | F-score | Random | Importance |
|---|---------|--------|------------|
| 4 | 0.874   | 0.871  | 0.874      |
| 5 | 0.874   | 0.872  | 0.875      |
| 6 | 0.860   | 0.854  | 0.860      |

### Entropy computation

We recognize that computing the argmax over the estimated entropy becomes more challenging when we move from linear to non-linear models. Specifically, the ease of factorization over individual dimensions, as found in linear classifiers, no longer applies in the non-linear context. In our exploration, we observed that non-linear models do indeed incur more computational overhead, but this increase in complexity is a trade-off we accept for the following reasons:

- _Increased Accuracy_: Non-linear models are often capable of capturing more intricate relations within the data, leading to higher accuracy in general compared to linear ones.
- _Higher-Order Interactions_: Despite the added complexity, our algorithm's ability to consider higher-order interactions between core-set features opens the door to more sophisticated and nuanced modeling. This approach may allow us to uncover relationships that linear models cannot.

We believe that the potential gains in modeling capability justify this trade-off.

### Evaluation pipeline

We agree that sensitivity varies by application and cultural context, and expert knowledge is often needed to make these determinations. For example, the disclosure of political orientation might have different implications in different regions. In our current evaluation, we opted for random assignment to maintain generality. However, we acknowledge that your suggestion could lead to a more nuanced evaluation. Your insight is appreciated, and we're open to further dialogue on this matter.

### Q1 (Expected information gain)

Thank you for this insightful question.
Indeed, we explored several alternative methods for our feature sampling strategy, including **(1)** an importance-based approach that prioritizes revealing more critical features first, and **(2)** a strategy based on the uncertainty of a feature given all revealed features. Our work compares against the former in the main text and appendix. Note that approaches such as the uncertainty score do not account for the model's prediction, limiting their effectiveness, while the Expected Information Gain is able to capture the benefit of revealing a particular feature in the context of our problem. We believe that our choice, backed by its proven efficacy in areas like active learning, represents a well-motivated and robust solution for the data minimization context, but we remain open to exploring other methods.

### Q2 (Algorithm's details)

As we elaborated in the main text (Lines 311 to 313), we did compare our proposed model with an _Optimal_ method that considers all possible subsets to choose the smallest coreset. Unfortunately, this exhaustive approach is not practical. Additionally, it poses a significant challenge in that it requires users to reveal all features at testing time, which is against the core motivation of this work. Our proposed strategy, in contrast, offers a more tractable solution, particularly when the relationship between features is linear, balancing efficiency with effectiveness.

### Q3 (Fairness)

You raise a great point regarding trust and disparate treatment. While our current work primarily focuses on the trade-off between accuracy and data leakage, we recognize the importance of fairness and the potential implications of unequal information solicitation across different demographic groups. Indeed, this is the subject of our ongoing investigation. Ensuring fairness by equalizing the amount of information revealed across groups is a promising direction, and it's one that we're actively exploring in our current work.
We fully acknowledge that a thorough investigation of these ethical and social dimensions is needed. We hope these clarifications adequately address your concerns. We are committed to making any necessary adjustments and look forward to any further suggestions you may have.

---

Rebuttal Comment 1.1: We wanted to reiterate our gratitude for your time and review and would like to check if you had any additional questions or comments. Many thanks!
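The one-step greedy acquisition loop discussed across these threads (maximally informative feature first, stop once the hard label is decided) can be sketched schematically as follows. This is an illustrative Monte Carlo version under assumed interfaces — `model` returns a hard label for a fully imputed input, `sample(j)` draws a plausible value for unrevealed feature `j`, and `ask_user(j)` obtains the true value — none of which are the paper's actual API:

```python
import math

def binary_entropy(p):
    # Entropy of a Bernoulli(p) hard-label prediction.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def predict_proba(model, revealed, unrevealed, sample, T):
    # Monte Carlo estimate of P(hard label = 1), imputing unrevealed features.
    hits = 0
    for _ in range(T):
        x = dict(revealed)
        for j in unrevealed:
            x[j] = sample(j)
        hits += model(x)
    return hits / T

def expected_entropy_after(model, revealed, unrevealed, sample, j, T):
    # Expected prediction entropy if feature j were revealed (one-step lookahead).
    rest = [k for k in unrevealed if k != j]
    total = 0.0
    for _ in range(T):
        r = dict(revealed, **{j: sample(j)})
        total += binary_entropy(predict_proba(model, r, rest, sample, T))
    return total / T

def greedy_minimize(model, revealed, unrevealed, sample, ask_user, T=50, delta=0.0):
    """Reveal features greedily until the hard label is (near-)certain."""
    unrevealed = list(unrevealed)
    while unrevealed:
        p = predict_proba(model, revealed, unrevealed, sample, T)
        if binary_entropy(p) <= delta:       # prediction already decided
            break
        j_star = min(unrevealed, key=lambda j: expected_entropy_after(
            model, revealed, unrevealed, sample, j, T))
        revealed[j_star] = ask_user(j_star)  # move j* from U to R
        unrevealed.remove(j_star)
    return revealed
```

On the motivating example, revealing `Job = 1` already drives the hard-label entropy to zero, so the loop terminates without soliciting Loc or Inc, which is the behavior the authors describe. Note also the cost the rebuttal concedes: each lookahead nests Monte Carlo loops, which is where the extra overhead for non-linear models comes from.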
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Summary: The work presents a method to increase the privacy of ML predictions at test time by asking users to reveal fewer features. The proposed method asks for features that are maximally informative of the prediction outcome given the features seen thus far, and stops when the prediction outcome can be decided. The method addresses the case of linear as well as non-linear thresholded classifiers, assuming Gaussian-distributed features and a local Taylor approximation of the predicted label probability. Experiments on multiple real datasets show a reduction in the data required for accuracy similar to baselines. Strengths: The writing is clear. The method is explained clearly step-by-step. I like the presentation of the results in the plots. The method is simple and is shown to work on real datasets. The idea of selecting maximally informative features until a decision is reached is natural. The idea is executed well. The problem of reducing the data required for a prediction is important to increase user privacy, so the method has practical significance. Weaknesses: On the writing, some algorithmic details can be improved, like mentioning the entropy calculation and calculating the core feature sets for $\delta > 0$. Core feature sets can be defined more rigorously by mentioning the sources of randomness in the probability expression. Results from datasets other than Credit, which are presented in the Appendix, can be summarised in the main text. The reasons for the effectiveness of the method are not clear to me and are not sufficiently explored in experiments. This is needed given the success of the simplifying assumption of Gaussian-distributed features. Is this because of the dataset or model characteristics, or is it due to the algorithm? Baselines such as removing all sensitive features might help check if dataset characteristics are the main contributor. An analysis of the examples which are predicted correctly without revealing features can shed more light. Some related work is missing.
See detailed remarks. The technical contributions in light of this work are unclear. The theoretical analysis leaves many questions open, e.g., the impact of the approximations, what optimal procedures look like, how much training data is needed to get significant privacy gains, and how to extend the method to non-linear classifiers and high-dimensional data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The notation in Definition 1 is unclear. What does the probability of the prediction f_\theta mean for unobserved variables X_U? Do we take an expectation over different values of X_U? An example of the meaning of this probability would help me. Similarly, how are the unrevealed variables in entropy calculations, e.g. in term A of equation (3), handled? Suppose we remove all 5 of the sensitive features; does this have significantly lower accuracy? That is, does choosing the sensitive features matter at all for prediction? How is entropy computed in equation 4? Please describe the Bayesian model used for the data. Consider adding related work on the following topics and discuss whether these are applicable to the problem setting. Active measurement of features, e.g. Li and Oliva 2021 ‘Active Feature Acquisition with Generative Surrogate Models’ http://proceedings.mlr.press/v139/li21p.html Dynamic measurement of features in time, e.g. Chang et al. 2019 ‘Dynamic Measurement Scheduling for Event Forecasting using Deep RL’ https://proceedings.mlr.press/v97/chang19a.html Feature pruning for causal effect estimation, e.g. Makar et al. 2019 ‘A Distillation Approach to Data Efficient Individual Treatment Effect Estimation’ https://ojs.aaai.org/index.php/AAAI/article/view/4375 --- After the rebuttal My remaining concern relates to the second question above. The reasons for the success of the method are not clear from the experiments and how they are presented. 
For instance, seeing the accuracy of a baseline which removes all sensitive features (minimum data leakage) will help contextualize all line plots (e.g. the percentage improvement over such a baseline can be the y-axis). This helps answer how much of the success is due to the method's feature selection versus the predictability of the public features in the datasets. Further, a detailed analysis of the examples that are predicted correctly with < 2 sensitive features will be instructive (e.g. looking at whether these are the same set of features or personalized to the data point). That said, I appreciate the baselines included by the authors to check the selection criteria and the experiments to check the effect of linearity, which are also required for a newly proposed method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations on modeling assumptions and theoretical analysis are acknowledged. ## Minor comments, no response is expected Line 4 of Algorithm 1, which checks core features, should be explained in detail for the case of \delta>0. Please specify the assumption that input features are jointly Gaussian more prominently, e.g. in an Assumption environment, in lines 177-180. Please cite the information processing inequality in Proposition 2 in the main text. Propositions 2 to 5 and Theorem 1 are known results, so they can be mentioned in text or denoted as lemmas with references. The statement in line 174 does not require pointing to Proposition 1; it holds because of the definition of entropy. Please provide guidelines on how to use the Gaussian approximation for categorical features. The introduction is nicely written. However, the goal/objective of the paper is repeated multiple times, which can be trimmed to be concise. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and review! Below, we report our answers to your questions. Feel free to let us know if there are further questions or concerns, and we'll be more than happy to elaborate. We also hope that you consider updating your score if these replies answer your questions. **Q1: Notation**: In our context, since $X_U$ represents unobserved variables, we treat them as multivariate random variables following the conditional distribution $ P(X_U ∣ X_R=x_R) $. The model prediction $f_\theta$ is inherently a function of $X_U$ and, therefore, is also considered a random variable. To clarify with an example, if $P(f_\theta = 1 \mid X_R = x_R) = 0.8$, this implies that for samples drawn from the distribution $x_U \sim P(X_U | X_R = x_R)$, 80% of the time you will observe the model prediction $f(X_R = x_R, X_U = x_U)$ resulting in a value of 1, and 20% of the time, the prediction will be 0. The unrevealed variables in entropy calculations, such as in term A of equation (3), are handled through similar probabilistic reasoning, where the conditional probabilities are leveraged to account for the uncertainty associated with these variables. We hope this clears up any confusion regarding the notation and handling of unobserved variables. **Q2: Clarification**: The impact of removing the 5 sensitive features depends on their importance relative to the public features. If they are significant contributors, their removal could notably affect accuracy. In our specific experiments, all features contributed to prediction. Thus, designating some as sensitive and excluding them could lead to a drop in accuracy. The choice of sensitive features indeed matters in our case. **Q3: Entropy computation**: The entropy in Equation (4) quantifies the uncertainty in the model’s prediction for unobserved sensitive features $X_U$. 
It's computed by estimating the distribution $f(X_R = x_R, X_U)$ of the model responses with revealed variables $X_R$, which requires estimating the conditional distribution $P(X_U | X_R = x_R)$ from the training data. This can be done efficiently under a Gaussian assumption or less efficiently with a Bayesian neural network model. We detail the Bayesian model used to estimate these conditional densities from Line 198 to Line 205, adhering to standard Bayesian neural network training and inference. **Q4: Related Work**: We appreciate the reviewer for highlighting relevant works related to active measurement, dynamic measurement, and feature pruning. Please note that we reported a detailed discussion of related work and its connection with differential privacy, feature selection, and active learning in Appendix B. Among the mentioned papers, we find the work by Li and Oliva (2021) to be the most closely aligned with our research. However, there are distinct differences. In our study, as demonstrated in the motivating example of Section 2 and the testing core feature set in Section 5.2, we show that users do not need to reveal all sensitive features to obtain a hard-label prediction with 100% confidence. Additionally, we provide an efficient algorithm that determines whether the current set of revealed features can ascertain the value of the model prediction with complete certainty. As for the other mentioned works, Chang et al. (2019) and Makar et al. (2019), we find their topics relevant but only tangentially connected to our problem setting. We assure you that we will include these related works and the nuanced discussions in our revised paper. Thank you for pointing us toward these resources. **Other comments**: Thank you for the detailed and insightful feedback. We sincerely appreciate your comments and will take them into account as we revise our paper. 
Regarding the use of categorical features, it is possible to employ a Bayesian network to estimate $P(X_U | X_R = x_R)$. However, it is essential to recognize that learning a Bayesian network can be a slow process, particularly when dealing with high-dimensional data. We would like to clarify that this challenge is not specific to our work but is a general concern in conditional density modeling of multivariate variables. The contribution of our work clearly extends beyond these constraints. We hope this addresses your concern, and we welcome any further questions or feedback. --- Rebuttal Comment 1.1: Title: Discussion Comment: We wanted to reiterate our gratitude for your time and review and would like to check if you had any additional questions or comments. Many thanks! --- Rebuttal Comment 1.2: Title: After the rebuttal Comment: I thank the authors for providing a detailed response to my questions. Most of my concerns, except on evaluation, are addressed. I have increased my score to 6, Weak Accept. Overall, I am more positive about the paper due to its contribution of defining the feature selection problem in a new context (privacy, test-time feature selection), and the simplicity and effectiveness of the method. My remaining concern relates to Q2 in the rebuttal. The reasons for the success of the method are not clear from the experiments and how they are presented. For instance, seeing the accuracy of a baseline which removes all sensitive features (minimum data leakage) will help contextualize all line plots (e.g. the percentage improvement over such a baseline can be the y-axis). This helps answer how much of the success is due to the method's feature selection versus the predictability of the public features in the datasets. Further, a detailed analysis of the examples that are predicted correctly with < 2 sensitive features will be instructive (e.g. looking at whether these are the same set of features or personalized to the data point). 
This said I appreciate baselines included by authors to check the selection criteria and experiments to check effect of linearity, which are also required for a newly proposed method. My concerns on entropy calculation, and the notation are addressed -> I would suggest explicitly naming the standard Bayesian techniques (e.g. from [10]) used in the implementation as it is an important detail. I would also suggest including the clarification on the random variable f_theta in the Notation paragraph since the notation is otherwise ambiguous. Related work can be more detailed by discussing the feature acquisition literature. --- Reply to Comment 1.2.1: Comment: Thank you for the positive assessment and for recognizing the novelty and significance of our work in terms of defining the concept of data minimization at inference time and its relation with privacy. Let us provide some additional details on our assessment: Firstly, we'd like to assure you that we have made a meticulous exploration of our results. This indeed included an analysis of instances that are accurately predicted with a minimal number of sensitive features. Even when restricted to k=1, we observed that the selected features are **not** uniform across the different users. We recognize the significance of this observation (thank you for your suggestion!) and will provide a more detailed explanation in the final version of our paper. Next, we agree with your recommendation to present our findings in the context of a minimum data leakage baseline. We have indeed conducted an evaluation under such conditions and it revealed a substantial decline in accuracy. We'll detail it in our revised manuscript, further substantiating our method's efficacy. Once again, thank you for your constructive feedback. We hope this addresses your last concern and that you could further champion our work.
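To make the sampling view in Q1 and the entropy estimate in Q3 of the rebuttal above concrete, here is a minimal Monte Carlo sketch assuming jointly Gaussian features and a hypothetical thresholded linear classifier; the function names and the toy numbers are illustrative, not the paper's implementation:

```python
import numpy as np

def conditional_gaussian(mu, cov, idx_r, x_r):
    """Mean and covariance of P(X_U | X_R = x_r) for a joint Gaussian N(mu, cov)."""
    idx_u = [i for i in range(len(mu)) if i not in idx_r]
    mu_r, mu_u = mu[idx_r], mu[idx_u]
    c_uu = cov[np.ix_(idx_u, idx_u)]
    c_ur = cov[np.ix_(idx_u, idx_r)]
    c_rr = cov[np.ix_(idx_r, idx_r)]
    mu_cond = mu_u + c_ur @ np.linalg.solve(c_rr, x_r - mu_r)
    cov_cond = c_uu - c_ur @ np.linalg.solve(c_rr, c_ur.T)
    return idx_u, mu_cond, cov_cond

def prediction_prob_and_entropy(f, mu, cov, idx_r, x_r, n_samples=10_000, seed=0):
    """Monte Carlo estimate of p = P(f = 1 | X_R = x_r) and its binary entropy."""
    rng = np.random.default_rng(seed)
    idx_u, mu_c, cov_c = conditional_gaussian(mu, cov, idx_r, x_r)
    x = np.tile(mu, (n_samples, 1))
    x[:, idx_r] = x_r                                  # revealed features are fixed
    x[:, idx_u] = rng.multivariate_normal(mu_c, cov_c, size=n_samples)
    p = np.mean([f(row) for row in x])                 # fraction predicted as 1
    pc = np.clip(p, 1e-12, 1 - 1e-12)
    h = -(pc * np.log2(pc) + (1 - pc) * np.log2(1 - pc))
    return p, h

# Toy setup: 3 independent standard-normal features, thresholded linear classifier.
mu, cov = np.zeros(3), np.eye(3)
f = lambda x: int(x.sum() > 0)
p, h = prediction_prob_and_entropy(f, mu, cov, idx_r=[0], x_r=np.array([5.0]))
# Revealing x_0 = 5 makes the sum almost surely positive, so p is near 1 and h
# near 0: no further features would need to be revealed to decide the label.
```

A greedy acquisition loop in the spirit of the method would repeat this estimate for each candidate feature and reveal the one that most reduces `h`, stopping once `h` falls below a threshold.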
Context-guided Embedding Adaptation for Effective Topic Modeling in Low-Resource Regimes
Accept (poster)
Summary: This paper proposes a new solution (Meta-CETM) for inferring topics on a dataset with only a few available documents. The main idea is to train the model on various tasks and then use it on a new, small dataset. There are extensive experiments with diverse topic models, in particular in this context of "few-shot" learning, and the results show that Meta-CETM outperforms its competitors. Strengths: - few-shot learning for topic modeling is an important topic today - the model looks sound (even though not really well explained) - the experimental framework is quite strong, with good results for the proposed solution Weaknesses: - The solution is not always well presented - It's a quite complicated model with many intertwined modules Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have some concerns on this paper. - The paper is not always well presented. Here I give some examples: * The problem formulation (2.1) is quite confusing, probably because of the limited space. This is much clearer in the supplementary material. Some aspects are not sufficiently described or motivated. For instance, the authors seem to use a self-attention mechanism similar to the one in the Transformer (noted Attn). However, this mechanism is normally based on an embedding matrix, which is not the case here since X is a simple BoW representation of the documents. * Why use the Weibull distribution in this context? There is no explanation at all. * What does "latent indication" mean for H(i)? * I guess the step 1.c line 114 is done after having computed all the word embeddings of the generated document (we need all the Z(i)). Is it true? If so, this step should be taken outside the for loop. - Would it be possible to use the adaptive word embedding module (Z(i)) for the bag-of-word encoder? - The experimental part of the paper looks quite strong. 
Important solutions for topic modeling (old and new) are considered among the competitors, and the authors use several evaluation criteria to compare against them (perplexity, classification, topic coherence and diversity, qualitative evaluation). However, I would expect more experiments on the few-shot context, for instance by varying the number of available documents (not only 5 or 10) and taking into account the semantic difference between the corpora (i.e., between the training set and the test set). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is not a single limitation mentioned in the paper... There is no future work either. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and helpful comments and suggestions. Your concerns have been addressed as follows. **Q1:** The paper is not always well presented. Here I give some examples: - *Q1.1:* The problem formulation (2.1) is quite confusing ... For instance, the authors seem to use a self-attention mechanism similar to the one in the Transformer (noted Attn). However, this mechanism is normally based on an embedding matrix, which is not the case here since X is a simple BoW representation of the documents. *A1.1:* First, we would like to clarify that some parts, including the notation table, training and testing algorithms, illustration of our settings, and some derivations, are put into the supplementary materials due to the space limit. Secondly, we address your question about the self-attention mechanism. We must apologize for the mistake in Eq. 5 due to our carelessness. Indeed, our self-attention operation does operate on an embedding matrix $\mathbf{H}^{(i)}\in \mathbb{R}^{D \times J}$, which is obtained by feeding the BoW representations $\mathbf{X}^{(i)}\in \mathbb{R}^{V\times J}$ of the documents into a multi-layer perceptron (MLP), where $V$ is the vocabulary size, $J$ is the number of documents in each task, and $D$ is the dimension of the extracted BoW features. Finally, we apply the self-attention mechanism $\mathrm{Attn}()$ to aggregate the information of the $J$ document features to infer the posterior of the context variable $\pmb{c}^{(i)}\in \mathbb{R}^{K}$. Hence, the correct version of Eq. (5) should be written as follows (we will fix this error in the revision): $$ q(\pmb{c}^{(i)}|\mathbf{X}^{(i)}) = \mathcal{N}(\pmb{\mu}_{\pmb{c}^{(i)}}, \pmb{\Sigma}_{\pmb{c}^{(i)}}); \quad \pmb{\mu}_{\pmb{c}^{(i)}}, \pmb{\Sigma}_{\pmb{c}^{(i)}} = \mathrm{Attn}(\mathbf{H}^{(i)}); \quad \mathbf{H}^{(i)} = \mathrm{MLP}(\mathbf{X}^{(i)}).$$ - *Q1.2:* Why use the Weibull distribution in this context? There is no explanation at all. 
*A1.2:* Sorry, but we could not find any part of our model that involves the Weibull distribution. However, one of the compared works, Meta-SawETM [1], utilizes the Weibull distribution for posterior approximation. - *Q1.3:* What does "latent indication" mean for H(i)? *A1.3:* Please refer to our answer to *Q1.1*; we call the embedding matrix $\mathbf{H}^{(i)}\in \mathbb{R}^{D \times J}$ the "latent indication", which stands for the extracted deterministic features of the BoW representations. - *Q1.4:* I guess the step 1.c line 114 is done after having computed all the word embeddings of the generated document (we need all the Z(i)). Is it true? If so, this step should be taken outside the for loop. *A1.4:* Yes, your understanding is correct, and step 1.c (line 114) should be taken outside the for loop. Thank you for pointing out this error, and we will fix it in the revision. **Q2:** Would it be possible to use the adaptive word embedding module (Z(i)) for the bag-of-word encoder? **A2:** The question you posed is a bit confusing to us. For most neural topic models, the BoW encoder is typically used to extract document representations (or topic proportions) based on the word frequencies, while the adaptive word embeddings $\mathbf{Z}^{(i)}$ in our paper are derived by modeling a task-specific semantic graph with a variational GAE. We do not understand exactly what it means to use the adaptive word embedding module for the bag-of-word encoder. Do you mean directly using the variational GAE combined with the semantic graph to learn the document representations (or topic proportions)? **Q3:** The experimental part of the paper looks quite strong. Important solutions for topic modeling (old and new) are considered ... However, I would expect more experiments on the few-shot context, for instance, by varying the number of available documents (not only 5 or 10) and taking into account the semantic difference between the corpora (i.e., between the training set and the test set). 
**A3:** For varying the number of available documents for each task, we conduct additional experiments on all four datasets with \{20, 50, 100\} documents in each task and list the perplexity results of the different compared methods in the **Author Rebuttal by Authors** part. As for taking the semantic difference between the training set and the test set into account, we have not performed such experiments due to the time limit, but we will leave it as a priority for our future work. [1] Bayesian deep embedding topic meta-learner. In ICML 2022. --- Rebuttal Comment 1.1: Title: All is ok for me Comment: I've read the other reviews and the rebuttal. For A1.2, you're right: I was confused and thought you used something from another paper, so it's all good to me. For A2, I better understand now that you have two types of information, so you cannot use one straight in replacement of the other, so it's ok as well. Thank you for the additional experiments, which confirm the value of your work. For me it's still an "accept". --- Reply to Comment 1.1.1: Comment: We genuinely appreciate your recognition of our work! We will make our best effort to improve the presentation of our paper. Best regards.
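The corrected Eq. (5) pipeline described in A1.1 above (an MLP on the BoW matrix, followed by self-attention aggregation over the J documents to produce the posterior parameters of the context variable) can be sketched as follows; all dimensions and the random weights are illustrative placeholders, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
V, J, D, K = 100, 10, 32, 16   # vocab size, docs per task, feature dim, context dim

# Illustrative random parameters; a real model would learn all of these.
W_mlp = rng.normal(0.0, 0.1, (D, V))
W_q, W_k, W_v = (rng.normal(0.0, 0.1, (D, D)) for _ in range(3))
W_mu = rng.normal(0.0, 0.1, (K, D))
W_sig = rng.normal(0.0, 0.1, (K, D))

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def encode_task(X):
    """X: (V, J) BoW matrix of one task -> mean and log-variance of context c."""
    H = np.tanh(W_mlp @ X)                 # H^(i): (D, J) per-document features
    Q, Km, Vm = W_q @ H, W_k @ H, W_v @ H
    A = softmax(Q.T @ Km / np.sqrt(D))     # (J, J) attention over the J documents
    h_agg = (Vm @ A.T).mean(axis=1)        # (D,) aggregated task representation
    return W_mu @ h_agg, W_sig @ h_agg     # mu and log sigma^2 of q(c | X)

X = rng.integers(0, 5, size=(V, J)).astype(float)   # a fake BoW matrix
mu_c, log_var_c = encode_task(X)
```

The point of the sketch is only the shapes: the attention runs on the MLP-extracted embedding matrix H, not on the raw BoW matrix X, which was the reviewer's concern.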
Summary: This paper proposed an approach for few-shot topic modeling. The authors first question the limitations of "static word embeddings" in previous related work when transferring to new tasks, and then propose to use the "adaptive word embeddings" generated by a VGAE to address this issue. Although the problem this paper aims to solve is meaningful, I think the description of the motivation is not clear enough. Even though the experimental results are acceptable, the methods used do not reflect the motivation of the paper. Strengths: - Few-shot learning for topic modeling is a meaningful problem. - By combining VGAE with the neural topic model, fairly good experimental results were achieved. Weaknesses: - This paper aims to address the limitations of "static word embeddings". In line 9, the authors claim that 'we introduce a variational graph autoencoder to learn task-specific word embeddings based on the dependency graph refined from the context of each task'. However, why would the dependency graph and VGAE help in learning task-specific word embeddings? If this claim holds true, simply using the context of each task could achieve the same purpose. In other words, I don't see how VGAE and the dependency graph would aid in learning the so-called 'adaptive word embeddings'. The authors should carefully explain why the dependency graph and VGAE can reflect the characteristics of each task. Would the dependency graph for each task be fundamentally different? In summary, I find this motivation unconvincing. - In line 14, the authors state that the Gaussian mixture prior can "facilitate the discovery of diverse topics and the quick adaptation to novel tasks." However, why can using a Gaussian mixture prior help with fast adaptation to novel tasks? What advantages does it have over the methods used in existing neural topic models? 
The authors should explain the principles that make the Gaussian mixture prior work, that is, what the motivation is for using a Gaussian mixture prior, rather than vaguely stating that it can help with adaptation. I agree that the authors provide a perspective on learning topics through clustering by using a Gaussian mixture prior, but the motivation they explained is not convincing. - Some sentences are overly long, making this paper hard to follow. For instance: Lines 24-27. - In lines 57-66, regarding the author's description of the contributions of this paper: the first and the second points are repetitive. - In lines 45-46, "it is experimentally found ...". This statement is vague and confusing. If it's a finding from the experiments conducted in this paper, then the results should be presented. If it's an experimental finding from previous work, then a reference should be provided. - According to the introduction and related work, there is an important existing work on few-shot topic modeling: "Few-shot learning for topic modeling". So why did the authors not compare the methods of this work in their experiments? The authors should give a reason. - What does ϕ stand for in Equation 10? This paper does not give an explanation. - The author demonstrated the effectiveness of the proposed method in the experiment. However, I believe there are some flaws in the experiment. Firstly, since the author believes that VGAE is the main reason for acquiring "adaptive word embeddings", the visualization results of the model without VGAE should also be provided in Figure 3, in order to prove the role of VGAE in learning "adaptive word embeddings", which is a major motivation of this paper. - Secondly, the ablation study is not comprehensive enough. On the one hand, the author should provide the results on all four datasets. 
On the other hand, the author should also provide the results of separately removing the context variable, graph VAE, and GMM prior, in order to clearly prove the effectiveness of each module proposed in this work. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the Weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please see the Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your careful consideration and valuable comments. In the following, we will try our best to address your concerns. **W1:** This paper aims to address the limitations ... The authors should carefully explain why the dependency graph and VGAE can reflect the characteristics of each task. Would the dependency graph for each task be fundamentally different? In summary, I find this motivation unconvincing. **R1:** Above all, we would like to clarify that our method is able to learn "adaptive word embeddings" because we utilize contextual information that is closely related to each task. In our problem setup, each task consists of only a handful of documents from a specific domain, which can lead to a world of difference between the contexts of different tasks. For instance, in a task related to "hardware", the context of the word **bus** is most likely to cover the words "data", "transmitted", "cache", and so on. Whereas in a task associated with "autos", the words that most often co-occur with the word **bus** are probably "car", "taxi", "passenger", etc. An illustrated example can be found in our Supplementary Materials. We believe that this contextual information is instrumental in capturing the precise meanings of words, so we refine it into a dependency graph and are able to learn "adaptive word embeddings" with the help of the VGAE. In other words, the dependency graph incorporating task-specific contextual information can reflect the characteristics of each task. **W2:** In line 14, the authors state that the Gaussian mixture prior can ... However, why can using a Gaussian mixture prior help with fast adaptation to novel tasks? What advantages does it have over the methods used in existing neural topic models? The authors should explain the principles ... but the motivation they explained is not convincing. **R2:** To clearly explain the motivation for using a GMM prior, we address two key questions. 
**1)** Why do we use a variational graph autoencoder instead of a graph autoencoder? **2)** Why do we adopt a Gaussian mixture prior instead of a standard normal prior? Actually, all of these choices serve one purpose, *i.e.,* to alleviate the issue of "*topic collapsing*". We found that if the learning of topic embeddings is not constrained, it will lead to highly repetitive topics (the content of all topics tends to be the same). Therefore, we use a variational GAE in the hope of regularizing the word latent space. Then we use Gaussian embeddings to represent topics to model the uncertainty, with a standard normal distribution as the prior. Even so, we found the learned topics were not diverse enough; thus, we rely on the GMM prior to further overcome the problem. **Please refer to our newly added one-page PDF for the corresponding visualization and numerical results.** **W3:** Some sentences are overly long, making this paper hard to follow. For instance ... If it's an experimental finding from previous work, then a reference should be provided. **R3**: We agree with you about these flaws in writing and expression, and we will improve our presentation in the revision to make it easier to follow. **W4:** According to the introduction and related work, there is an important existing work on few-shot topic modeling: "Few-shot learning for topic modeling". So why did the authors not compare the methods of this work in their experiments? The authors should give a reason. **R4:** We found that the authors of "Few-Shot Learning for Topic Modeling" did not open-source their code. Indeed, we tried to contact the authors by email but did not get any response, so we do not compare against the method of this work in our experiments. However, we have compared against a more recently developed approach with strong performance – "Bayesian Deep Embedding Topic Meta-Learner". **W5:** What does $\phi$ stand for in Equation 10? This paper does not give an explanation. 
**R5:** We apologize for our carelessness. $\phi$ in Eq. 10 stands for the topic-word matrix, which is actually denoted by $\boldsymbol{\beta}$ in our method. **W6:** The author demonstrated the effectiveness of the proposed method in the experiment. However, I believe there are some flaws in the experiment. Firstly ... to prove the role of VGAE in learning "adaptive word embeddings", which is a major motivation of this paper. **R6:** We think there may be some misunderstanding about the role of the VGAE. Indeed, the task-specific dependency graph is the main reason for acquiring "adaptive word embeddings" (refer to our response to **W1**), and the VGAE is a mapping function that serves as a bridge. If we removed the VGAE module from our model, then the contextual information relevant to each task would not be exploited, as the dependency graph is the input to the VGAE. However, introducing additional contextual information closely related to each task is precisely the biggest innovation and contribution of our method. If we do not use the dependency graph and only rely on the BoWs of the documents to learn our model, it should perform comparably to the baseline method ETM, since there is no information increment in this case; the only difference between our model and ETM would be the consideration of one more context variable. **W7:** Secondly, the ablation study is not comprehensive enough. On the one hand, the author should provide the results ... in order to clearly prove the effectiveness of each module proposed in this work. **R7**: As you mentioned that the ablation study is not comprehensive enough, we have further extended the ablation study, and the corresponding results are exhibited in our newly added one-page PDF. Please note that the results we report for separately removing the graph VAE do not mean that we have completely discarded the VGAE module, but rather that we have replaced it with a (non-variational) graph autoencoder. The reason for doing so has been explained in our response to **W6**. 
--- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. Most of my concerns have been resolved. I am increasing the rating to 5. --- Reply to Comment 1.1.1: Comment: Thank you for considering a higher rating. We believe the constructive feedback will help to further improve the quality of our paper. Best regards.
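The task-specific dependency graphs discussed in R1 and R6 above, with their "bus" example, can be illustrated by a toy co-occurrence construction; the windowed counting here is only an assumed stand-in for the paper's actual graph refinement, and the two mini-corpora are invented:

```python
import numpy as np

def dependency_graph(docs, vocab, window=3):
    """Symmetric word co-occurrence adjacency matrix built from the few
    documents of one task; words co-occurring within `window` tokens get an
    edge. Illustrative only, not the authors' exact construction."""
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(vocab)))
    for doc in docs:
        toks = [idx[w] for w in doc.split() if w in idx]
        for i, wi in enumerate(toks):
            for wj in toks[i + 1 : i + 1 + window]:
                if wi != wj:
                    A[wi, wj] += 1
                    A[wj, wi] += 1
    return A

vocab = ["bus", "data", "cache", "car", "passenger"]
hardware_task = ["data transmitted over the bus to the cache",
                 "the bus carries data to the cache"]
autos_task = ["the bus and the car carry passenger traffic",
              "a passenger boards the bus not the car"]

A_hw = dependency_graph(hardware_task, vocab)
A_auto = dependency_graph(autos_task, vocab)
# "bus" connects to "data"/"cache" in the hardware task but to "car"/"passenger"
# in the autos task: the same word gets different graph neighbourhoods, which is
# what lets a graph encoder produce task-specific ("adaptive") word embeddings.
```

Feeding such per-task adjacency matrices into a (variational) graph autoencoder is then what moves the contextual information into the embeddings, which is the bridge role claimed for the VGAE in R6.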
Summary: The authors target the problem of multi-meaning words across different tasks for topic models, particularly under low-resource settings. To this end, they propose a variational graph autoencoder with a trainable Gaussian mixture prior to capture the distribution of task-specific word embeddings. Strengths: Overall, the paper is sound with several strengths: - The authors address the low-resource regime that has received little attention in recent topic modeling research. - The experiments are comprehensively covered with detailed discussions. Weaknesses: However, the paper exhibits some weaknesses: - The examples illustrating the applications of few-shot neural topic models are not persuasive. In particular, few-shot topic modeling might not be employed to learn users’ past purchases or online behaviors. In addition, during crises, e.g. Covid-19, the number of documents towards certain topics would rather burgeon than remain limited, hence invalidating the need for few-shot topic models. - The paper needs more comparison with prior embedding-based topic models [1, 2]. Additional evaluation of the model performance against such works could make the experiments more convincing. - The choice of prior distribution plays an important role in specifying the latent space. In the paper, the authors have not elaborated on why the Gaussian mixture model is selected as the prior, or provided experiments to demonstrate its advantage over other prior choices. [1] Neural models for documents with metadata (Card et al., 2018) [2] Neural topic models via optimal transport (Zhao et al., 2021) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could you supply more performance comparisons of the proposed method with previous ones, especially those utilizing word embeddings for topic modeling? 
- Could you in more detail discuss the significance of the Gaussian mixture model as the prior distribution or clarify its benefits over other prior distributions? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable comments. We believe the constructive feedback will improve the paper and increase its potential impact on the community. Regarding the weaknesses you mentioned, we respond as follows. **W1:** The examples illustrating the applications of few-shot neural topic models are not persuasive. In particular, few-shot topic modeling might not be employed to learn users’ past purchases or online behaviors. In addition, during crises, e.g. Covid-19, the number of documents towards certain topics would rather burgeon than remain limited, hence invalidating the need for few-shot topic models. **R1:** We are sorry that the examples illustrating the applications of few-shot neural topic models were not persuasive enough, but few-shot topic models do have value in real-world applications. Take the example you listed. To this day, there has indeed been a substantial increase in the number of documents on the topic of "Covid-19". However, at the very beginning of this epidemic, there were relatively few cases of infection in all regions, and the reports of related medical diagnoses were very limited. Under such circumstances, few-shot topic models can help us extract key information from limited resources, thus helping us take appropriate preventive and control measures. **W2:** The paper needs more comparison with prior embedding-based topic models [1, 2]. Additional evaluation of the model performance against such works could make the experiments more convincing. **R2:** Based on your suggestion, we compared our model with SCHOLAR [1] and NSTM [2], two prior embedding-based topic models, on all four datasets with document size 10. The results are listed below. 
|20NG|PPL|TD|TC|
|:--|:--:|:--:|:--:|
|ETM [a]|3107|0.8395|-0.8437|
|Meta-SawETM [b]|2984|0.5643|-0.6086|
|SCHOLAR [1]|4371|0.5779|-0.6792|
|NSTM [2]|3190|0.7008|-0.6202|
|Meta-CETM|1170|0.8154|-0.3701|

|Yahoo|PPL|TD|TC|
|:--|:--:|:--:|:--:|
|ETM [a]|2817|0.8851|-0.8913|
|Meta-SawETM [b]|2365|0.5465|-0.6406|
|SCHOLAR [1]|4697|0.4981|-0.7429|
|NSTM [2]|3153|0.7505|-0.6351|
|Meta-CETM|1219|0.7886|-0.4639|

|DB14|PPL|TD|TC|
|:--|:--:|:--:|:--:|
|ETM [a]|3054|0.8106|-0.8719|
|Meta-SawETM [b]|1914|0.7545|-0.9204|
|SCHOLAR [1]|4913|0.5112|-0.8217|
|NSTM [2]|3379|0.6195|-0.7626|
|Meta-CETM|1084|0.7475|-0.4783|

|WOS|PPL|TD|TC|
|:--|:--:|:--:|:--:|
|ETM [a]|3310|0.9286|-0.9785|
|Meta-SawETM [b]|2253|0.7217|-0.7475|
|SCHOLAR [1]|3884|0.6413|-0.7434|
|NSTM [2]|3164|0.7659|-0.6472|
|Meta-CETM|1293|0.8667|-0.5177|

**W3:** The choice of prior distribution plays an important role in specifying the latent space. In the paper, the authors have not elaborated on why the Gaussian mixture model is selected as the prior, or provided experiments to demonstrate its advantage over other prior choices. **R3:** To explain clearly the motivation for using a GMM prior, we address two key questions: **1)** Why do we use a variational graph autoencoder instead of a graph autoencoder? **2)** Why do we adopt a Gaussian mixture prior instead of a standard normal prior? Actually, all of these choices serve one purpose, *i.e.,* to alleviate the issue of "*topic collapsing*". We found that if the learning of topic embeddings is not constrained, it will lead to highly repetitive topics (the content of all topics tends to be the same). Therefore, we use a variational GAE in the hope of regularizing the word latent space. Then we use Gaussian embeddings to represent topics to model the uncertainty, with a standard normal distribution as the prior. Even so, we found the learned topics were not diverse enough; thus, we rely on the GMM prior to further overcome the problem. 
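For reference, the TD column in the tables above is typically computed as the fraction of unique words among the top-$k$ words of all topics (the definition popularized by ETM); below is a minimal sketch, where the choice $k=25$ and the shape of $\phi$ are our assumptions rather than details taken from the paper:

```python
import numpy as np

def topic_diversity(phi, topk=25):
    """Fraction of unique words among the top-`topk` words of every topic.

    phi: (V, K) topic-word matrix, one column per topic.
    A value of 1.0 means no word is shared across topics' top-word lists;
    1/K means all topics share the same top words.
    """
    V, K = phi.shape
    top_words = np.argsort(-phi, axis=0)[:topk]  # (topk, K) word indices
    return len(np.unique(top_words)) / (topk * K)
```

By this definition, lower TD indicates more repetitive topics, i.e., the "*topic collapsing*" discussed in R3.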
**Please refer to our newly added one-page PDF for corresponding visualization and numerical results.** [1] Neural models for documents with metadata (Card et al., 2018) [2] Neural topic models via optimal transport (Zhao et al., 2021) [a] Topic modeling in embedding spaces (Dieng et al., 2020) [b] Bayesian deep embedding topic meta-learner (Duan et al., 2022) --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you to the authors for their response. After reading all reviews and rebuttals, I found that some of my concerns have been resolved to some extent. However, because the examples are still equivocal to me, I have decided to keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your further reply. As you mentioned that some of your concerns have been resolved, we would like to know which concerns were not adequately addressed. Also, please let us know which examples are still equivocal to you. Do you mean the examples used to illustrate the applications of few-shot neural topic models?
Summary: The authors present a new neural topic model which aims to solve the problem of learning task-specific word embeddings in a low resource scenario. In addition to a somewhat typical neural TM, dependency graphs are collected using parsers, and embedded via GCNs to produce adaptive word embeddings. A topic-word matrix then models the task-specific distribution over words as a gaussian over their adaptive embeddings. The topic-specific embedding spaces capture the role a word is likely to play within that context. The model is evaluated across 4 datasets and a number of metrics, where it shows strong improvements in perplexity, and good performance in document classification. Strengths: - strong empirical performance in perplexity, and good performance in document classification. - comparisons to many existing similar models. The improvements over models using sentence BERT are especially interesting. Good ablations. Weaknesses: - reliance on pre-trained parsing tools. How well does this approach work in different languages, or styles/domains of text that differ significantly from the parsing training data? - the problem at the heart of this work is lexical ambiguity. Here parsing, and then graph-based embedding of the parse, aim to find task-specific meanings of each word, and there is little doubt that adaptive word embeddings are an effective approach to this problem. Outside of topic modeling, LLMs and other transformer-based models solve this same task, and also refine word-specific embeddings into context-specific ones. The only aspect in which this is included is via CombinedTM/ZeroShotTM. Admittedly, I haven't worked on topic modeling since the switch to neural models, but it would seem that BERT would also solve the issues being pursued in this work. I didn't see a compelling explanation for why this approach based on dependency graphs is an improvement over generic transformer-based sentence embeddings, all other aspects remaining the same. 
- even if the work is geared towards low-resource settings, I would like to see the performance as a function of task dataset size. In its current presentation I think the performance is compelling enough to warrant acceptance, as the performance improvement seems sufficient across a number of metrics to be of great interest to the topic modeling community. However, if those performance margins decrease significantly (or completely) as dataset size increases to even modest sizes, it would be difficult to find a use-case for this approach. It would then be a story of offloading some of the learning problem to the pre-trained parsing and priors over their embeddings, which solves a practical problem, but not so much one of academic interest. - discussion of conceptually related work is sparse. The related works section is overly brief given the amount of work addressing similar core tasks. One earlier work that seems spiritually related might be [1]. - surprisingly, CNN > MLP is a more important design decision than any architectural change represented across several previous papers. It raises a little bit of an alarm bell regarding the importance of these architectural designs as evaluated here. Extending the document classification task to more datasets (than the 2 here) would increase confidence in the importance of the work. L26: vontextualized [1] Word Representations via Gaussian Embedding https://arxiv.org/abs/1412.6623 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (mixed with weaknesses) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your careful consideration and valuable comments. In the following, we will try our best to address your concerns. **W1:** reliance on pre-trained parsing tools. How well does this approach work in different languages, or styles/domains of text that differ significantly from the parsing training data? **R1:** Thank you very much for raising a very meaningful question. However, we think it is outside the scope of this paper, and we will keep it as a focus of our future work. **W2:** The problem at the heart of this work is lexical ambiguity ... I didn't see a compelling explanation for why this approach based on dependency graphs is an improvement over generic transformer-based sentence embeddings, all other aspects remaining the same. **R2:** On the one hand, the remarkable capabilities of pre-trained language models such as BERT rely on a massive training corpus (e.g., BERT was trained on a dataset of over 3.3 billion words). In contrast, we present a data-efficient framework that only relies on a batch of training tasks (about 5 million words for 20NG) to produce well-fitted word embeddings for each task. On the other hand, due to the strong biases introduced by the training corpus (even though it may cover a wide range of content), pre-trained language models may not be so effective in adapting to a completely unfamiliar context. By comparison, our proposed framework has been carefully designed to learn how to adapt effectively to novel tasks from an unfamiliar corpus, so it should perform better as a few-shot learner. **W3:** Even if the work is geared towards low-resource settings, I would like to see the performance as a function of task dataset size ... However, if those performance margins decrease significantly (or completely) as dataset size increases to even modest sizes, it would be difficult to find a use-case for this approach. 
It would then be a story of offloading some of the learning problem to the pre-trained parsing and priors over their embeddings, which solves a practical problem, but not so much one of academic interest. **R3:** We have conducted additional experiments by varying the number of documents in each task from 20 to 50 to 100. Please refer to our unified response to all reviewers (**Author Rebuttal by Authors**) to see the corresponding results. **W4:** Discussion of conceptually related work is sparse. The related works section is overly brief given the amount of work addressing similar core tasks. One earlier work that seems spiritually related might be [1]. **R4:** We agree that there has been sparse discussion of conceptually related work, and we will add a more thorough discussion of the corresponding works in the revision, including the one [1] you mentioned. **W5:** surprisingly, CNN > MLP is a more important design decision than any architectural change represented across several previous papers. It raises a little bit of an alarm bell regarding the importance of these architectural designs as evaluated here. Extending the document classification task to more datasets (than the 2 here) would increase confidence in the importance of the work. **R5:** Indeed, we were also surprised by the experimental results showing that CNN > MLP is a more important design decision in our setup, perhaps because our input data are the bag-of-words (BoW) representations of the documents. To increase the credibility of this finding, we conducted document classification experiments on additional datasets, *i.e.,* Yahoo and WOS. The results are listed in the following table. 
||Yahoo|||WOS||
|:--|:--:|:--:|:--:|:--:|:--:|
|**Methods**|**5way-5shot**|**5way-10shot**||**5way-5shot**|**5way-10shot**|
|MAML (MLP)|45.42|51.00||37.77|40.43|
|PROTO (MLP)|50.01|56.16||39.61|41.46|
|FT (MLP)|48.59|53.06||36.52|37.22|
|FT$^{*}$ (MLP)|50.73|56.74||45.02|51.20|
|MAML (CNN)|48.81|56.50||47.28|57.32|
|PROTO (CNN)|53.16|63.66||59.05|**67.75**|
|FT (CNN)|56.78|66.04||54.68|63.39|
|FT$^{*}$ (CNN)|53.28|52.56||51.42|61.98|
|HNS-SawETM|52.35|57.86||42.09|56.91|
|Meta-SawETM|52.45|60.58||43.39|57.44|
|CombinedTM|57.94|64.75||56.16|65.97|
|ZeroShotTM|58.12|66.21||58.50|66.10|
|Meta-CETM|**63.84**|**72.67**||**61.47**|67.62|

[1] Word Representations via Gaussian Embedding https://arxiv.org/abs/1412.6623 --- Rebuttal Comment 1.1: Comment: Dear **Reviewer biyw**, Thanks for your patience and careful review! We have endeavored to address your concerns at the first rebuttal stage, and hope that those responses and empirical results will further convince you of the significance of our work. Considering that the discussion period will end on **Aug 21st**, we would like to know if you have any other questions about our paper, and we are glad to have a discussion with you during the remaining time. Best regards.
Rebuttal 1: Rebuttal: We really appreciate all the reviewers for their constructive and helpful comments. We apologize for typos, grammar mistakes, unclear notations and missing citations; they will be corrected so that the overall writing meets NeurIPS standards. Here we briefly introduce our newly added rebuttal PDF and also provide additional experimental results.

------

1. In our one-page rebuttal PDF, we exhibit the results of:
- visualizations of the adapted embedding space with different prior choices for the topic-word matrix $\boldsymbol{\beta}$, including **no prior**, **standard Gaussian distribution prior** and **GMM prior**, as suggested by reviewer **4MPJ** and reviewer **VS4z**.
- an ablation study on all four datasets (20NG, DB14, Yahoo and WOS) of separately removing our module designs (context variable, Graph VAE and GMM prior) in Table 1, as suggested by reviewer **VS4z**. The ETM is chosen as our baseline. For removing the Graph VAE but keeping the GMM prior, we replace the **Graph VAE** with a **graph autoencoder** to model the task-specific dependency graph. For removing the GMM prior but maintaining the Graph VAE module, we apply the **standard Gaussian distribution prior** in place of the **GMM prior**.

------

2. Moreover, as mentioned by reviewer **biyw** and reviewer **CvqN**, we vary the number of documents in each task from \{5, 10\} to \{20, 50, 100\} and provide the perplexity (PPL) results on all four datasets as follows. 
|||20NG||||DB14||
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|**Methods**|**20**|**50**|**100**||**20**|**50**|**100**|
|LDA|2979|2443|2118||3095|2353|1858|
|PFA|2439|2271|**2060**||1903|1887|1637|
|ProdLDA|4807|4489|4466||5819|5794|6016|
|ETM|3276|3215|3199||2870|2834|2837|
|MAML-ProdLDA$^{*}$|4378|4372|4359||4612|4463|4381|
|MAML-ETM$^{*}$|3287|3186|3172||2819|2778|2715|
|Meta-SawETM|2657|3761|3661||2355|2577|2984|
|CombinedTM|2331|2267|2205||1863|1765|1695|
|ZeroShotTM|2673|2397|2330||1722|1638|1604|
|Meta-CETM(ours)|**1216**|**1517**|2138||**1109**|**1306**|**1468**|

|||Yahoo||||WOS||
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|**Methods**|**20**|**50**|**100**||**20**|**50**|**100**|
|LDA|3916|3279|2833||2370|2091|1896|
|PFA|2545|2326|2169||1675|1663|**1643**|
|ProdLDA|6093|5784|5736||4617|4386|4369|
|ETM|2781|2801|2811||3189|3176|3296|
|MAML-ProdLDA$^{*}$|4033|3951|4202||3908|3863|3845|
|MAML-ETM$^{*}$|3439|3315|3256||4189|4062|3947|
|Meta-SawETM|2859|3037|3251||3620|3365|3101|
|CombinedTM|2543|2481|2286||2587|2473|2330|
|ZeroShotTM|2664|2496|2319||2660|2497|2372|
|Meta-CETM(ours)|**1369**|**1440**|**1743**||**1482**|**1576**|1786|

Pdf: /pdf/ad7716d96526e71708c913244f79845266a976bd.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper addresses the problem of inducing neural topic models in low-resource regimes by learning adaptive word embeddings that exploit contextual grammar information. The adaptive word embeddings are learnt with a variational graph autoencoder, and the topics are formed from a Gaussian mixture prior of the latent word embedding space that generates the observed words and their semantic relations. The paper introduces a variational inference algorithm to estimate the document- and topic-specific latent variables using MLP and MLP with attention, respectively. In addition, GCN is employed to learn the latent word representations given initial word embeddings and word relations derived from a neural dependency parser. Finally, the parameters of the GMM are learned independently via EM. Evaluation shows strong empirical results in few-shot topic modeling and document classification. Strengths: The main contribution of the paper is the introduction of contextual semantic information into the neural topic model. Empirical results suggest that this approach is beneficial both in terms of per-holdout-word perplexity and topic coherence while remaining competitive in terms of topic diversity. Strong results are also observed in few-shot classification. The paper is fairly well written, but the baselines could have been better explained either in section 3.1 or the related work. Weaknesses: I can't parse Eq. 10 in the evaluation: what is the superscript (1)s? In the experiments in section 3.2.3 (few-shot classification), I could not follow the evaluation scheme described in lines 238-239. How were these experiments conducted? How were the class-specific topic-word matrices calculated, what data was used for this, and how was the reconstruction error measured? Also, I could not find the results from this analysis. The results in Table 2 correspond to datasets that have ground truth topic labels. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No limitations were discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the quality of our work! We will take your suggestions to give a more detailed explanation of the baselines in the revision. Here we clarify some of your questions. **Q1:** I can't parse eq. 10 in the evaluation: what is the superscript (1)s? **A1:** Eq. 10 is the formula for calculating the per-holdout-word perplexity, where $\phi \in \mathbb{R}^{V \times K}$ denotes the topic-word matrix and $\theta \in \mathbb{R}^{K \times N}$ represents the topic proportion matrix, and the superscript $s$ indexes the collected samples. Here we use the notation $\phi^{(1)s}\theta^{(1)s}$ to accommodate hierarchical topic models, which have multiple layers of document representations. By using the superscript (1), we mean that we use the representation of the bottom layer to compute the perplexity. Indeed, for most regular topic models with only a single layer of document representation, we can omit the superscript (1) and write the notation as $\phi^{s}\theta^{s}$. **Q2:** In the experiments in section 3.2.3 (few-shot classification), I could not follow the evaluation scheme described in lines 238-239. How were these experiments conducted? How were the class-specific topic-word matrices calculated, what data was used for this, and how was the reconstruction error measured? Also, I could not find the results from this analysis. The results in Table 2 correspond to datasets that have ground truth topic labels. **A2:** Above all, the few-shot classification experiment requires available ground truth topic labels, which are used to compute the accuracy. Next, we elaborate on the evaluation scheme described in lines 238-239 with an example of a 5-way 5-shot classification task. Specifically, the data of a 5-way 5-shot task consists of a "*support set*", which includes 25 documents from 5 different topics (5 for each topic), and a "*query set*", which contains 15 documents from each of the 5 topics. 
Our goal is to train a task-specific classifier using the "*support set*" and compute the classification accuracy on the "*query set*". So for the well-trained topic models, we use five documents of each topic in the "*support set*" to adapt a topic-word matrix, respectively; the resulting five topic-word matrices, denoted as {$\phi_1$, $\phi_2$, $\phi_3$, $\phi_4$, $\phi_5$}, are called class-specific topic-word matrices (here a topic refers to a category). Then for each document in the "*query set*", we use the trained topic model to derive its topic proportion $\theta_q$, which is subsequently combined with each of the five class-specific topic-word matrices to calculate the data likelihood $\\{ p(x_q | \theta_q, \phi_i) \\}_{i=1}^5$, $x_q$ is the BoW of the query document. The reconstruction error is defined as the negative data likelihood, so we classify it as the topic with the smallest reconstruction error. --- Rebuttal Comment 1.1: Comment: Dear **Reviewer fyGx**, Thanks for your patience and careful review! Not sure if the responses we offered in the first rebuttal stage adequately addressed your concerns? Considering that the discussion period will end on **Aug 21st**, please let us know if you have any other questions about our paper, and we will be happy to discuss them with you during the remaining time. Best regards.
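For concreteness, the two evaluation quantities explained in A1 and A2 above, per-holdout-word perplexity and reconstruction-error classification, might be sketched as follows. The shapes, the multinomial likelihood, and the smoothing constant are our assumptions, and a single collected sample stands in for the $S$ samples of Eq. 10:

```python
import numpy as np

def holdout_perplexity(phi, theta, x):
    """Per-holdout-word perplexity for a single-layer topic model.

    phi:   (V, K) topic-word matrix
    theta: (K, N) topic-proportion matrix
    x:     (V, N) bag-of-words counts of the held-out words
    """
    p = phi @ theta                       # (V, N) unnormalized word rates
    p = p / p.sum(axis=0, keepdims=True)  # per-document word distributions
    log_lik = (x * np.log(p + 1e-12)).sum()
    return np.exp(-log_lik / x.sum())

def classify_query(x_q, theta_q, class_phis):
    """Assign a query document to the class whose adapted topic-word
    matrix reconstructs it best (smallest negative log-likelihood).

    x_q:        (V,) bag-of-words of the query document
    theta_q:    (K,) topic proportions inferred for the query
    class_phis: list of (V, K) class-specific topic-word matrices
    """
    errors = []
    for phi in class_phis:
        p = phi @ theta_q
        p = p / p.sum()
        errors.append(-(x_q * np.log(p + 1e-12)).sum())
    return int(np.argmin(errors))
```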
Summary: This paper proposes a method for few-shot topic modeling. Specifically, rather than following traditional wisdoms to learn static word embeddings for all the tasks/domains, the authors allow task-specific word representations such that the knowledge from the source task can be better transferred to a target task. The authors also employ Gaussian mixture prior with EM algorithm to capture the clustering structure of distributed word representations. Experimental results demonstrate the effectiveness of the proposed method. Strengths: 1. The proposed method for few-shot topic modeling looks novel and intuitive to me. 2. The experiment looks inclusive and the proposed method achieved leading performance. Weaknesses: 1. Though the paper is generally well-written, some parts are confusing to me. Please refer to the questions part. 2. [1] also follows a clustering perspective for topic modeling and the difference between [1] and the proposed method can be discussed. [1] Effective Neural Topic Modeling with Embedding Clustering Regularization Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In Eq 1, is it $Z^{(i)\intercal}Z^{(i)}$ or $Z^{(i)}Z^{(i)\intercal}$? What is the relationship between $\hat{A}$ and $A$? 2. In the introduction line 49, the authors claim that "task-specific semantic graphs between words using well-established dependency parsing tools". However, I fail to find any information about the semantic graph using parsing tools in the rest of the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations are not discussed in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive comments and feedback. The weaknesses have been addressed below. **Q1:** In Eq 1, is it ${Z^{(i)}}^{\top}Z^{(i)}$ or $Z^{(i)}{Z^{(i)}}^{\top}$? What is the relationship between $\hat{A}$ and $A$? **A1:** Since we assume $Z^{(i)} \in \mathbb{R}^{D \times V}$ in the article, it should be ${Z^{(i)}}^{\top}Z^{(i)}$ in Eq. 1, which corresponds to the generated adjacency matrix $A^{(i)} \in \mathbb{R}^{V \times V}$. Thank you for pointing out our typo here. In addition, we use a variational graph autoencoder to model the adjacency matrix $A$ of the semantic graph. Following the standard encoder-decoder architecture shown in Fig. 2, $\hat{A}$ can be viewed as the reconstruction of $A$; *i.e.*, $A$ is the ground-truth adjacency matrix and $\hat{A}$ represents the predicted adjacency matrix. **Q2:** In the introduction line 49, the authors claim that "task-specific semantic graphs between words using well-established dependency parsing tools". However, I fail to find any information about the semantic graph using parsing tools in the rest of the paper. **A2:** We apologize for not covering this part of the information, and we will add the corresponding details in the revision. Actually, we build task-specific semantic graphs with the help of **spaCy**, a library for advanced natural language processing that provides a variety of linguistic annotations to give us insights into texts' grammatical structure. Concretely, for a specific task, we use the built-in syntactic dependency parser to analyze each document, and the resulting dependency labels describe the relations between individual tokens, like a subject or object, which also become the basis for constructing the task-specific semantic graph. For example, if a dependency label is assigned between two vocabulary terms in any document, we add an edge between the corresponding nodes in the semantic graph. 
Conversely, if two vocabulary terms are not assigned a dependency label in any document, then there is no edge between the corresponding nodes in the graph. Fig. 1 of our Supplementary Materials illustrates the constructed task-specific semantic graphs. **W2:** [1] also follows a clustering perspective for topic modeling and the difference between [1] and the proposed method can be discussed. **Discussion:** While both [1] and our method follow a clustering perspective to learn topics, their focus and target problems are different. On the one hand, the starting point of [1] is the phenomenon of "*topic collapsing*", a common issue that plagues most existing topic models. Its solution is to regularize topic embeddings as cluster centers and word embeddings as cluster samples, building on ETM [2]. It assigns all words properly to each topic by solving a well-defined optimal transportation problem. On the other hand, our method strives to solve the problem of learning topics effectively from only a few documents. The starting point is to learn adaptive word embeddings whose semantics can be well adapted to the given task by using extra contextual grammar information. Therefore, we adopt a graph autoencoder to model the semantic dependency graph. To avoid learning repetitive topics and ensure learned topic distributions cover as many significant words as possible for the given task, we impose a GMM prior on the word latent space such that the adaptive word embeddings can be reasonably encapsulated by the topic distributions. [1] Effective Neural Topic Modeling with Embedding Clustering Regularization [2] Topic Modeling in Embedding Spaces --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. After reading all the review comments and rebuttals, I have decided to keep my rating unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for your time and valuable comments. Best regards.
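The edge-construction rule described in A2 above might be sketched as follows. Here the dependency parse is mocked as (head, child) word pairs for self-containment; in practice these pairs would come from spaCy's syntactic dependency parser:

```python
import numpy as np

def build_semantic_graph(parsed_docs, vocab):
    """Task-specific semantic graph from dependency parses.

    parsed_docs: list of documents, each a list of (head, child) word pairs
                 (mocked here; normally produced by a dependency parser).
    vocab:       list of V vocabulary terms.
    Returns a symmetric (V, V) 0/1 adjacency matrix with an edge between two
    terms iff some dependency label links them in at least one document.
    """
    index = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    A = np.zeros((V, V), dtype=int)
    for doc in parsed_docs:
        for head, child in doc:
            if head in index and child in index:  # skip out-of-vocab tokens
                i, j = index[head], index[child]
                A[i, j] = A[j, i] = 1
    return A
```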
null
null
null
null
Discovering Hierarchical Achievements in Reinforcement Learning via Contrastive Learning
Accept (poster)
Summary: This paper focuses on the problem of sequential decision-making within a hierarchical framework, where tasks exhibit a hierarchical decomposition structure, and the agent does not possess any prior knowledge of the task dependency graph. In contrast to prior hierarchical approaches that directly model dependencies and utilize two-level policies for task resolution, it investigates an even more demanding scenario. Specifically, the agent lacks information about the unlocked achievements within each episode. To tackle this challenge, the paper employs PPO as its backbone RL algorithm. Interestingly, it is found that the representations learned by PPO possess some ability to predict the next locked achievement, albeit with limited confidence. To further enhance this predictive capability, the paper proposes two contrastive loss mechanisms as representation learning objectives. These two mechanisms help the representation predict the agent’s next unlocked achievement (intra-trajectory contrastive loss) and learn a representation of each achievement that does not capture environment-specific or spurious features (cross-trajectory matching). Strengths: 1. The authors propose a novel self-supervised loss within the context of hierarchical decision-making, which could be easily integrated into the existing RL algorithm (PPO). 2. Good empirical performance boost compared with the previous strongest model-based RL algorithm (Dreamer-v3) on the Crafter environment. Weaknesses: My concerns are mostly about the relaxed assumption regarding the agent's knowledge of its unlocked achievements during an episode: in particular, whether it is realistic in real applications, and how much it affects the overall performance (in other words, a comparison to a baseline method that does assume such prior knowledge). Please see my detailed comments in the Questions section. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Here are my questions about the paper. Given the limited amount of time for rebuttal, I understand that it may not be possible to address all of the points with comprehensive additional experiments. But I would be happy to raise my score if my following concerns are adequately addressed. 1. **Assumptions of the problem setting**: The paper introduces an additional level of difficulty by assuming that the agent does not possess knowledge of the specific achievements unlocked throughout each episode. This assumption diverges from previous model-based and hierarchical approaches, which rely on such information. It would be helpful if the authors could provide a real-world example or scenario where relaxing this assumption becomes necessary. This would aid in understanding the practical applicability of such an assumption. From my perspective, this assumption appears weak, as it is difficult to envision an agent successfully completing an episode without awareness of the achievements attained during the process. 2. **Backbone RL algorithm**: The paper extensively discusses the effectiveness of the Proximal Policy Optimization (PPO) algorithm as the backbone for addressing tasks with hierarchical structures. The authors demonstrate how the representations learned by the PPO agent exhibit predictive abilities for the next achievement, as outlined in sections 3.1 and 3.2. However, it would be valuable to clarify whether these findings are specific to policy gradient methods or if they can be extended to value-based approaches as well. Understanding if the proposed contrastive loss mechanisms can be applied to value-based reinforcement learning algorithms would contribute to a more comprehensive evaluation of their potential. 3. 
**Baseline Comparison**: It would be insightful to compare the performance of the two proposed contrastive losses with an "oracle" objective, where the agent does possess prior knowledge of its unlocked achievements. For instance, for intra-trajectory achievement prediction, a simple 22-way classification could be employed as the loss, predicting the next achievement based on the current state-action pair. Similarly, for the cross-trajectory objective, minimizing the distance between representations of the same achievements could be considered. Such a comparison would facilitate a better understanding of the impact of relaxing the assumption regarding the agent's knowledge of achievements on the overall algorithm performance. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. We appreciate the encouraging comments (“the author proposes a novel self-supervised loss within the context of hierarchical decision-making”, “good empirical performance boost compared with the previous strongest model-based RL algorithms”). We would like to address the questions and concerns of the reviewer, as presented below. --- **Q1: Assumption of the problem setting appears weak.** We wish to emphasize that our work focuses on developing an agent capable of discovering reusable skills and composing them to solve complex tasks in open-ended environments, as stated in the first paragraph of Section 1 in our main paper. We agree with your view that, for completing specific tasks with pre-defined subtask dependencies (e.g., making pasta with a detailed recipe), leveraging these dependency structures (e.g., providing the entire recipe) and incorporating subtask completion signals (e.g., acknowledging the completion of making the sauce) would be beneficial for training an agent [1, 2]. However, in open-ended environments where the final goal is not clearly defined and an agent faces the challenge of solving a myriad of tasks, pre-defining subtask dependencies for each task could be impractical. In such situations, an agent must explore environments, build a repertoire of skills, and combine these skills to solve tasks in an autonomous way. One realistic example is the game of Minecraft, where players are not provided with explicit instructions for survival, such as building shelters or crafting weapons. Instead, they must find their own survival strategies through exploration. Due to the unlimited number of potential survival strategies, defining specific subtask dependencies or providing subtask completion signals becomes infeasible. In this context, it is reasonable to assume that an agent receives a reward if they discover a new survival strategy. 
This approach aligns neatly with our reward assumption where an agent is granted a reward of 1 when unlocking a new achievement. It is important to note that the objective of Crafter is not solely about collecting diamonds. There is no pre-defined end goal and an agent is rewarded as it continually discovers and develops new survival skills. Thank you for raising this particular concern regarding the assumption. We appreciate your feedback and recognize the need to elucidate realistic scenarios where our assumption holds. We will revise the introduction section to provide a more in-depth explanation of the practical applicability of our assumption. Additionally, we will revisit lines 39 to 41 of the main paper and rewrite them to better articulate the potential limitations of the hierarchical approach in an open-ended world. **Q2: Application of contrastive learning to value-based methods.** Thank you for bringing this to our attention. We evaluate our contrastive learning method alongside a popular off-policy value-based algorithm QR-DQN and observe its strong performance. For more details on the experimental settings and results, please refer to General Response Q2. **Q3: It would be insightful to compare the performance of the two proposed contrastive losses with an "oracle" objective.** Thank you for your valuable suggestion. We train PPO agents with the oracle objectives on Crafter for 1M environment steps and compare their performance with our proposed contrastive objectives. Following your recommendation, we substitute the intra-trajectory contrastive prediction with a 23-way classification (22 achievements and 1 no achievement) using oracle labels. Additionally, we replace the cross-trajectory Wasserstein matching with the exact matching using oracle labels. To more precisely assess the impact of each objective, we choose not to employ the memory component, which is not necessarily required for implementing the intra- and cross-trajectory objectives. 
Tables 1 and 2 of the attached file present the performance of the oracle intra- and cross-trajectory objectives, respectively. Notably, the oracle cross-trajectory matching outperforms our approach. When it comes to the intra-trajectory prediction, however, there is no significant difference between the oracle and our method. It is important to highlight that the achievement label itself does not contain detailed information about the achievement. For example, the label indicating crafting a stone pickaxe omits essential information regarding the agent's requirement for wood and stone, as well as proximity to a crafting table. We hypothesize that predicting the next achievement in the latent space of the encoder, whose representations may encompass richer information about object location and inventory states, provides additional benefit over predicting achievement labels. --- Thank you again for your constructive comments, which help us to improve our paper’s quality. We hope that our answers address all the reviewer's points. **References** [1] Sungryull Sohn et al., Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies, NeurIPS 2018. \ [2] Robby Costales et al., Possibility Before Utility: Learning And Using Hierarchical Affordances, ICLR 2022. --- Rebuttal Comment 1.1: Title: Thank you! Comment: I appreciate the additional experiments conducted by the authors, and they have adequately addressed my concerns. I would love to see the work accepted and shared with the community at large.
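The oracle objectives discussed in this exchange can be made concrete with a short sketch. This is a hypothetical NumPy illustration under stated assumptions (function names, embedding shapes, and batch layout are invented for exposition), not the authors' implementation: the intra-trajectory oracle is a plain 23-way cross-entropy on labels, and the cross-trajectory oracle pulls together exactly-matched representations of the same achievement.

```python
import numpy as np

def oracle_next_achievement_loss(logits, oracle_labels):
    """Hypothetical oracle intra-trajectory objective: an N-way
    cross-entropy (e.g. 23 classes: 22 achievements + 1 'no achievement')
    predicting the next achievement label from each state-action
    representation, using oracle labels."""
    # numerically stable log-softmax
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # negative log-likelihood of the oracle labels
    return -log_probs[np.arange(len(oracle_labels)), oracle_labels].mean()

def oracle_matching_loss(reps_a, reps_b):
    """Hypothetical oracle cross-trajectory objective: squared distance
    between representations of the same achievement drawn from two
    episodes, replacing Wasserstein matching with exact oracle pairing."""
    return np.mean(np.sum((reps_a - reps_b) ** 2, axis=1))
```

The rebuttal's hypothesis is then visible in the design: the classification loss sees only the label, while the contrastive variant operates on encoder representations that may carry richer state information.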
Summary: This paper introduces achievement distillation, a representation learning method that is combined with PPO to obtain state-of-the-art results on the 2D Crafter benchmark. First, the authors demonstrate that with simple hyper-parameter tweaks, the performance of vanilla PPO can be greatly improved. Next, they detail the three components of achievement distillation, which leverage achievement labels $g$. Intra-trajectory achievement prediction uses a contrastive objective to maximize the similarity between state-action pairs and the next achievement. Cross-trajectory achievement matching maximizes the similarity between state-action pairs of the same achievement across episodes using optimal transport. Finally, they use the achievement representations as memory by concatenating the last achievement representation to the policy and value inputs. This results in a method that achieves high performance on the Crafter benchmark, especially for difficult-to-reach achievements. Strengths: The paper was generally easy to follow. The experiments improving the performance of vanilla PPO were exciting! It’s great to see better-tuned baselines. The improvements in PPO also naturally flowed into the introduction of achievement distillation. Achievement distillation is an interesting form of representation learning, and to my knowledge, is novel. The authors conducted experiments only on the Crafter environment. The results on this benchmark are compelling as it is extremely challenging, and as the authors highlight, achievement distillation can perform better on very hard-to-achieve tasks. The authors additionally include ablation studies by subtracting components of their method. I found the ideas in the paper to be easy to follow and straightforward in a good way. Weaknesses: The approach appears slightly overfit to the chosen Crafter benchmark, and the authors do not test achievement distillation in any other settings. 
Though the results on Crafter are compelling, it would be great if the authors could provide more examples and discussion of when this could be applied to other environments, and if so, how. The comparison to baselines is a bit misleading, as other approaches, like DreamerV3, do not make use of the achievement labels necessary for achievement distillation. That being said, even PPO outperforms these baselines! I also found the log scale on the axes of Figure 6 to be a bit confusing: is a success rate < 0.01% really significant? I generally recommend acceptance of this work, the primary drawbacks being its limited applicability to broader settings and evaluation on only one environment. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Major: Since Crafter is partially observed, do you use a recurrent network? Minor: Line 141: outperforms Dreamer: is this for the same number of environment steps? I would expect model-based methods to have higher sample efficiency, though they may have lower asymptotic performance. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work is heavily designed for the Crafter benchmark. While this is impressive, it would be good if the authors could include a discussion of where achievement distillation would be useful beyond just this setting. Where is it inapplicable? What are the challenges in implementing this? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
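The intra-trajectory objective summarized in this review can be illustrated with a minimal InfoNCE-style sketch. Everything here is an assumption for exposition (NumPy instead of a deep-learning framework, cosine similarity, the temperature value, and batch construction); it is not the paper's actual training code.

```python
import numpy as np

def intra_trajectory_contrastive_loss(sa_embed, ach_embed, temperature=0.1):
    """Each state-action embedding (row i) should be most similar to the
    embedding of the achievement it precedes (row i); the other
    achievements in the batch serve as in-batch negatives."""
    # cosine similarity between every state-action / achievement pair
    sa = sa_embed / np.linalg.norm(sa_embed, axis=1, keepdims=True)
    ach = ach_embed / np.linalg.norm(ach_embed, axis=1, keepdims=True)
    logits = sa @ ach.T / temperature
    # cross-entropy with the diagonal entries as positives
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The cross-trajectory component would additionally require a soft assignment between achievement sets of two episodes (the optimal-transport step), which is omitted here for brevity.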
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We are encouraged by the reviewer’s positive comments (“the experiments improving the performance of vanilla PPO were exciting”, “the ideas in the paper are easy to follow and straightforward in a good way”). We would like to address the questions raised by the reviewer, as presented below. --- **Q1: More benchmarks. It would be great if the authors could provide more examples and discussion of when this could be applied to other environments.** Thank you for your suggestion. We conduct experiments on two additional benchmarks that feature hierarchical achievements: Procgen Heist and a custom MiniGrid environment. Please refer to the Global Response Q1 for more details on the benchmarks and our experimental results. **Q2: DreamerV3 does not make use of the achievement labels.** We first clarify that our method operates under the assumption that an agent has information only regarding when a new achievement is unlocked and does not utilize achievement labels, as detailed in Section 2.1 of our main paper. Additionally, this information can be easily retrieved from the reward signal, specifically at timesteps when rewards are 1. The sole distinction from DreamerV3 is our assumption that there exists an underlying hierarchical structure of achievements. We will explain our assumption and the difference between other baselines more clearly in the revised version. **Q3: I also found the log-scale on the axes of Figure 6 to be a bit confusing. Is a success rate < 0.01% really significant?** Thank you for your helpful comment. We report the individual success rates of achievements using a log-scale, following the practice in DreamerV3 [1]. A success rate of an achievement under 0.01% implies that this is unlocked only in a subset of individual runs, also mentioned in the second paragraph in Section 5.2 of our main paper. 
Nevertheless, collecting a diamond in an individual run remains noteworthy, as it is particularly challenging due to the scarcity of resources. We will explain this more clearly in the revised version. **Q4: Since Crafter is partially observed, do you use a recurrent network?** In this paper, we do not employ recurrent neural networks such as LSTM and only utilize convolutional neural networks for image processing. LSTM has been widely utilized in RL to address partial observability [2, 3]. However, it has been noted that employing LSTM for PPO on Crafter does not yield a significant performance improvement, resulting in a score increase of only 0.1 [4]. Incorporating our contrastive learning with memory-based policies would be a promising avenue for future research. **Q5: Does PPO outperform DreamerV3 for the same number of environment steps? I would expect model-based methods to have higher sample efficiency, though they may have lower asymptotic performance.** We evaluate PPO and DreamerV3 for the same 1M environment steps and find that PPO outperforms DreamerV3. We agree that model-based algorithms generally exhibit greater sample efficiency than model-free algorithms, provided the learned models are accurate. However, in our attempt to reproduce the results of DreamerV3, we notice that the losses for training a world model (Equation 5 in the original paper) tend to increase as training progresses, leading to imprecise world models. Furthermore, we find a rapid drop in the policy entropy, reaching a level of 0.5 at timestep 50K and remaining low thereafter. This phenomenon hinders sufficient exploration for unlocking new achievements. --- We again thank the reviewer for providing constructive feedback, which truly enhances the quality of our paper. We hope that our response adequately addresses all the reviewer's questions. --- **References** [1] Danijar Hafner et al., Mastering Diverse Domains through World Models, arXiv 2023. 
\ [2] Matthew Hausknecht and Peter Stone, Deep Recurrent Q-Learning for Partially Observable MDPs, arXiv 2015. \ [3] Steven Kapturowski et al., Recurrent Experience Replay in Distributed Reinforcement Learning, ICLR 2019. \ [4] Aleksandar Stanic et al., Learning to Generalize with Object-centric Agents in the Open World Survival Game Crafter, ToG 2023. --- Rebuttal Comment 1.1: Title: Thank you for your work! Comment: I would like to thank the authors for continuing to improve their work. I find the new experiments across domains and with value-based algorithms convincing. I am consequently improving my score. I additionally think it would be of great value to the community if the authors could detail how they exactly implement the necessary achievement counting component of their method for new domains.
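The assumption defended in the rebuttal above (unlock events are recoverable from the reward stream alone, with no achievement labels) amounts to something like the following sketch. The function name and the configurable unlock reward are illustrative, not from the paper.

```python
def achievement_timesteps(rewards, unlock_reward=1.0):
    """Recover the timesteps at which a new achievement was unlocked
    purely from the reward stream: in Crafter-style environments the
    agent receives a reward of 1 exactly when an achievement unlocks.
    Which achievement it was remains unknown, since no labels are used."""
    return [t for t, r in enumerate(rewards) if r == unlock_reward]
```

In variants with a different reward structure (e.g. the adjusted Heist rewards of 2 per lock and 10 for the gem described in the global response), the unlock threshold would be adapted accordingly.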
Summary: This work introduces achievement distillation, a model-free RL method designed to discover achievements without the need for explicit long-term planning components. The proposed method comprises three primary components: two self-supervised tasks and a memory component. The self-supervised tasks, namely intra-trajectory achievement prediction and cross-trajectory achievement matching, guide the encoder to predict the next achievement to be unlocked using a contrastive learning loss function with different objectives. The memory component is formed by concatenating the latent state representation from the encoder with the action and the representation of the previous achievement. These components are integrated into the PPO algorithm through two alternating training phases involving policy training and auxiliary self-supervised tasks. The effectiveness of achievement distillation is evaluated in the Crafter environment, where it demonstrates significant performance improvements over strong baselines. Additionally, the authors conducted analyses on model sizes, representations, and the contribution of individual components. Strengths: 1. The primary strength of the paper lies in the significant results it presents. The main results in Table 1 and Figure 5 provide clear evidence that the proposed achievement distillation method outperforms the baseline methods to a substantial degree. Furthermore, the results in Figure 6 demonstrate that achievement distillation achieves success in several achievements that none of the baselines can accomplish (e.g., making an iron sword), which is impressive. 2. The authors have done a great job in implementing various relevant strong baselines. Reproducing these baselines is non-trivial given their complexity. 3. 
While the concept of self-supervised auxiliary tasks and the specific self-supervised losses used are not novel ideas, the authors have managed to execute them very well, resulting in an agent with remarkable performance. 4. The paper is well-written, and the ideas are effectively presented and justified. Weaknesses: 1. The primary weakness of this work lies in the limited range of environments in which the proposed method is evaluated. Since the authors only tested it on the Crafter environment, it remains uncertain whether their method would be effective in different settings and whether it has avoided overfitting to the Crafter environment. 2. The ablation study appears to be somewhat superficial. Although the contribution of intra-trajectory achievement prediction is evident, the significance of cross-trajectory achievement matching and memory is not as convincing. The authors should provide additional evidence to support the role of these components. Please refer to the next section for suggestions on how to improve this aspect. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I would encourage the authors to address the main limitation of their work discussed above by conducting further experimentation in various environments, such as MiniHack and maybe ProcGen. 2. It would greatly enhance the understanding of the contribution of cross-trajectory achievement matching and memory if the authors could conduct additional research. For instance, including the individual success rates for all achievements (Figure 6) while ablating these components could be a valuable option. 3. It would be interesting if the authors could present the results in Table 1 and Figure 5 for a larger range of environment steps (e.g., 5M, 10M). This would clarify whether this method is only more data-efficient or remains superior with more steps. 4. In Figure 6, why is the individual success rate not shown for MuZero+SPR? 5. 
Do you think your method would work with RL algorithms other than PPO? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We appreciate the encouraging comments (“The authors have done a great job in implementing various relevant strong baselines”, “The paper is well-written, and the ideas are effectively presented and justified”). We would like to address your questions below. --- **Q1: I would encourage the authors to conduct further experimentation in various environments, such as MiniHack and maybe ProcGen.** Thank you for your suggestion. We conduct experiments on two additional benchmarks that feature hierarchical achievements: Procgen Heist and a custom MiniGrid environment. Please refer to the Global Response Q1 for more details on the benchmarks and our experimental results. **Q2: It would greatly enhance the understanding of the contribution of cross-trajectory achievement matching and memory if the authors could conduct additional research.** Thank you for the suggestion. We first provide the individual success rates for challenging achievements, such as collecting iron, while conducting ablation studies on cross-trajectory matching (C) and memory (M). The table below demonstrates that both cross-trajectory achievement matching and memory play significant roles in unlocking these challenging achievements.

|Achievement|PPO+I|PPO+I+C|PPO+I+C+M|
|---|---|---|---|
|Make stone pickaxe|10.92|16.43|22.93|
|Make stone sword|14.32|20.08|23.35|
|Collecting iron|1.33|2.70|4.02|
|Make iron pickaxe|0.01|0.00|0.01|
|Make iron sword|0.00|0.00|0.02|

However, we find that there is no substantial difference in the success rates for easy achievements, such as crafting wooden tools, as shown below.

|Achievement|PPO+I|PPO+I+C|PPO+I+C+M|
|---|---|---|---|
|Make wood pickaxe|71.42|71.44|72.69|
|Make wood sword|67.16|68.68|70.86|

Additionally, to gain deeper insight into how the cross-trajectory matching works, we conduct an oracle experiment suggested by Reviewer cpyF. 
In this experiment, we replace the cross-trajectory Wasserstein matching with exact matching using oracle achievement labels and compare its performance with our matching algorithm. Table 2 in the attached file presents the performance of oracle cross-trajectory matching. From this analysis, we anticipate a score increase of 1.8 when the cross-trajectory matching is optimized to its fullest potential. **Q3: It would be interesting if the authors could present the results in Table 1 and Figure 5 for a larger range of environment steps.** Thank you for your valuable recommendation. We increase the number of environment steps to 10M and evaluate the performance of our method against three baselines: PPO, LSTM-SPCNN, and SEA. It is worth noting that training a DreamerV3 agent with 10M environment steps was not feasible within the constrained time of the rebuttal period. With a single NVIDIA RTX 3090 GPU, it requires approximately 2.5 days to complete just 1M environment steps. As illustrated in Figure 6 of the attached file, our method not only demonstrates higher sample efficiency but also outperforms the baselines with superior final performance. **Q4: In Figure 6, why is the individual success rate not shown for MuZero + SPR?** We first clarify that the results of MuZero + SPR in this paper have been derived from the original paper due to the unavailability of its source code. Additionally, the original paper only presents the individual success rate for a MuZero + SPR agent with pre-training using 150M exploratory data. Since our primary focus is on training an agent without pre-training, we have opted not to include MuZero + SPR in Figure 6. Afterwards, we plan to reproduce the results of MuZero + SPR and provide its individual success rate once the official source code becomes available. **Q5: Do you think your method would work with RL algorithms other than PPO?** Thank you for bringing this to our attention. 
We evaluate our contrastive learning method alongside a popular off-policy value-based algorithm, QR-DQN, and observe its strong performance. For more details on the experimental settings and results, please refer to the Global Response Q2. --- Once again, we really appreciate the reviewer's insightful questions, which greatly help us to enhance our paper. We hope that our response above addresses all of the reviewer's questions. --- Rebuttal Comment 1.1: Comment: I would like to express my gratitude to the authors for investing the time in conducting these additional experiments. These efforts serve to underscore the importance of their work and also lead me to elevate my rating accordingly.
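The scores traded back and forth in this exchange (e.g. the anticipated increase of 1.8, or QR-DQN going from 4.14 to 8.07) use the Crafter benchmark's aggregate metric: a geometric-mean-style average of per-achievement success rates, S = exp(mean(ln(1 + s_i))) - 1 with rates in percent. A sketch of that formula, included here for reference:

```python
import math

def crafter_score(success_rates_pct):
    """Crafter aggregate score: S = exp(mean(ln(1 + s_i))) - 1, where s_i
    is the success rate (in percent) of achievement i. The log-mean
    rewards unlocking rare achievements more than inflating common ones."""
    logs = [math.log(1.0 + s) for s in success_rates_pct]
    return math.exp(sum(logs) / len(logs)) - 1.0
```

This explains why the ablation gains on hard achievements (stone and iron tools, in the tables above) translate into sizable score differences even when easy achievements barely move.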
Summary: This paper proposes a contrastive learning approach for representation learning in the problem of hierarchical achievement discovery. The proposed method leverages previous contrastive learning losses and combines them with optimal transport. Empirical results show that the learned representation could improve PPO in the Crafter environment. Strengths: 1. It is interesting that self-supervised representation learning can improve PPO, which significantly outperforms model-based approaches, even regarding sample efficiency. 2. Experimental results show the strong performance of the proposed method in the Crafter environment. 3. Presentation: this paper is generally well-organized and easy to follow. Weaknesses: 1. Domain knowledge: unlike the baselines, the proposed method utilizes additional important knowledge, i.e., identifying when a new achievement is unlocked (which can be easily inferred from the observed rewards). This additional knowledge readily explains why the achievement prediction of the proposed method is much better than that of PPO. 2. Generalization: this paper only shows its results in the Crafter environment. It is highly recommended to conduct experiments in other environments to show its generality. 3. Novelty: though it is interesting to use optimal transport for matching achievements, the proposed method is a natural application of contrastive learning, which is not quite novel. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. This paper defines MDPs with hierarchical achievements. Has prior work studied this model? If so, please add references. 2. It is interesting to see that PPO is more sample-efficient than model-based methods in this paper’s experiments. Beyond the improved implementation practices for PPO, are there any other insights into this result? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper has discussed its limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive and helpful feedback. We are encouraged by the reviewer’s positive comments (“it is interesting that self-supervised representation learning can improve PPO”, “this paper is generally well-organized and easy to follow”). We would like to address the questions and concerns raised by the reviewer, as detailed below. --- **Q1: The proposed method utilizes additional important knowledge.** We agree with your view that our method utilizes additional information about the reward structure, where each reward signal represents distinct achievements. Nonetheless, we would like to emphasize that we employ the same reward function as the baseline methods without any modification and do not leverage any other information regarding achievements beyond the reward function itself. **Q2: It is highly recommended to conduct experiments in other environments to show its generality.** Thank you for your recommendation. We conduct experiments on two additional benchmarks that feature hierarchical achievements: Procgen Heist and a custom MiniGrid environment. Please refer to the Global Response Q1 for more details on the benchmarks and our experimental results. **Q3: This paper defines MDPs with hierarchical achievements. Has prior work studied this model? If so, please add references.** The concept of MDPs with hierarchical achievements has been studied in recent prior work [1, 2]. While we have cited these studies in the related work section, we will also include these references in Section 2.1 for further clarification. Thank you for the suggestion. **Q4: It is interesting to see PPO is more sample-efficient than model-based methods in this paper’s experiments. Except for implementation practice improvement of PPO, any other insights for this result?** In general, model-based algorithms exhibit greater sample efficiency than their model-free counterparts, provided that the learned models are accurate. 
However, training accurate world models on procedurally generated, partially observable environments poses a significant challenge. When reproducing the results of DreamerV3, we find that the losses for the world model gradually increase as training continues. It is worth noting that an agent in Crafter encounters a new environment in each episode. Given this, the world model of DreamerV3, which relies heavily on prior experience from the replay buffer for training, does not transfer well to unseen environments. This challenge has been underscored by a recent study, which further demonstrates that a Dreamer agent struggles to adapt rapidly to the ever-changing environments [3]. In contrast, PPO usually updates the policy and value networks using Monte Carlo methods with recently collected episodes, and does not heavily rely on models trained with prior experience, compared to DreamerV3. Additionally, we observe a rapid drop in the policy entropy of DreamerV3, reaching a level of 0.5 at timestep 50K and remaining low thereafter. In contrast, PPO maintains the policy entropy above a level of 1.0 by the end of the training process. This phenomenon limits a DreamerV3 agent's ability to explore environments and unlock new achievements. --- We again thank the reviewer for giving constructive suggestions. We hope our explanation above addressed all reviewer's questions. --- **References** [1] Robby Costales et al., Possibility Before Utility: Learning And Using Hierarchical Affordances, ICLR 2022. \ [2] Zihan Zhou and Animesh Garg, Learning Achievement Structure for Structured Exploration in Domains with Sparse Reward, ICLR 2023. \ [3] Isaac Kauvar et al., Curious Replay for Model-based Adaptation, ICML 2023. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response, which partially addresses my concerns. 
However, I am not entirely convinced that PPO would be more sample-efficient than Dreamer, as Dreamer seemed able to learn good models in the experimental environment. I guess it is possible that DreamerV3 has many more parameters than the proposed method. I wonder if the authors have tried fewer parameters for DreamerV3. Anyway, I will raise my score to 6.
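The policy-entropy figures quoted in the rebuttal above (DreamerV3 dropping to roughly 0.5 while PPO stays above 1.0) refer to the standard entropy of a categorical policy; a minimal sketch of that quantity:

```python
import math

def policy_entropy(action_probs):
    """Entropy of a categorical policy in nats: H = -sum_a p(a) ln p(a).
    Higher values correspond to a more exploratory, less collapsed policy;
    a uniform policy over N discrete actions attains the maximum ln N."""
    return -sum(p * math.log(p) for p in action_probs if p > 0.0)
```

For example, a uniform policy over 4 actions has entropy ln 4 ≈ 1.39 nats, while a fully deterministic policy has entropy 0, which is why a sustained value near 0.5 indicates limited exploration.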
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and effort in providing valuable feedback. We are especially appreciative of the encouraging comments we received from each of them. To begin our response, we would like to first address some of the common concerns that have been raised by multiple reviewers. --- **Q1: Application to other environments** We conduct experiments on two additional benchmarks featuring hierarchical achievements: Procgen Heist [1] and a custom MiniGrid environment [2]. Heist is a procedurally generated “Door Key” environment where the goal is to steal a gem hidden behind a sequence of blue, green, and red locks, as illustrated in Figure 1 of the attached file. To open each lock, an agent must collect a key with the corresponding color. We consider opening each lock and stealing a gem as achievements. It is worth noting that Heist introduces another challenge, given that the colors of the walls and background can vary between environments, whereas Crafter maintains fixed color patterns for its terrains. Moreover, there is only a single pathway to unlock an achievement in Heist, while multiple routes exist to unlock an achievement in Crafter. For instance, an agent can readily gather wood almost everywhere on the map due to its abundance. To ensure closer alignment with Crafter, we slightly adjust the reward structure so that an agent receives a reward of 2 for opening each lock and a reward of 10 for successfully stealing a gem. We train an agent in the “hard” difficulty mode for 25M environment steps and evaluate its performance in terms of the success rate of gem pilfering and the episode reward. Additionally, we create a customized "Door Key" environment using the MiniGrid library to assess the effectiveness of our method on a larger achievement graph. The design of this environment takes inspiration from TreeMaze proposed in SEA [3]. 
An agent must sequentially unlock doors, find keys, and ultimately reach the green square, as depicted in Figure 2 of the attached file. The environment comprises a total of 10 achievements, such as opening doors, collecting keys, and reaching the goal. An agent receives a reward of 1 for unlocking a new achievement, mirroring the reward structure in Crafter. We train an agent for 1M environment steps and evaluate its performance in terms of the geometric mean of success rates and the episode reward, following the same protocol as Crafter. Figures 3 and 4 of the attached file illustrate the performance of our method and PPO. Remarkably, our method significantly enhances the performance of PPO in Heist, elevating the success rate from 29.6% to 71.0%. Furthermore, our method outperforms PPO in the MiniGrid environment by a substantial margin, increasing the score from 3.33% to 8.04%. These results highlight the broad applicability of our method to diverse environments with hierarchical achievements. **Q2: Application to value-based RL algorithms** We evaluate our contrastive learning method in conjunction with a popular off-policy value-based algorithm, QR-DQN [4], on Crafter for 1M environment steps and observe its strong performance. Specifically, we apply our contrastive learning to the Q-network encoder at intervals of every 8000 environment steps. We employ Huber quantile regression to preserve the Q-network’s output distribution in a manner congruent with the value function optimization in QR-DQN. Figure 5 of the attached file demonstrates that our contrastive learning method is also effective in value-based algorithms, enhancing the performance of QR-DQN from 4.14 to 8.07. --- We hope that our response addresses all the questions and concerns of the reviewers. Please let us know if there are any further questions. --- **References** [1] Karl Cobbe et al., Leveraging Procedural Generation to Benchmark Reinforcement Learning, ICML 2020. 
\ [2] Maxime Chevalier-Boisvert et al., Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks, arXiv 2023. \ [3] Zihan Zhou and Animesh Garg, Learning Achievement Structure for Structured Exploration in Domains with Sparse Reward, ICLR 2023. \ [4] Will Dabney et al., Distributional Reinforcement Learning with Quantile Regression, AAAI 2018. Pdf: /pdf/f68241477f9ce7e454a74a78d4b966a3dc0868e8.pdf
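Since the Huber quantile regression used with QR-DQN [4] in Q2 may be unfamiliar, here is a minimal single-sample sketch of the quantile Huber loss; the function name and the scalar-target simplification are ours, not the authors' implementation.

```python
def quantile_huber_loss(quantiles, target, kappa=1.0):
    # Quantile Huber loss for one target sample, as in QR-DQN.
    # quantiles[i] estimates the tau_i = (i + 0.5) / N quantile of the return.
    n = len(quantiles)
    total = 0.0
    for i, theta in enumerate(quantiles):
        tau = (i + 0.5) / n
        u = target - theta
        # Huber penalty: quadratic near zero, linear beyond kappa.
        huber = 0.5 * u * u if abs(u) <= kappa else kappa * (abs(u) - 0.5 * kappa)
        # Asymmetric weight |tau - 1{u < 0}| pushes theta toward its quantile level.
        total += abs(tau - (1.0 if u < 0 else 0.0)) * huber
    return total / n
```

With all quantile estimates equal to the target the loss is zero; the asymmetric weight is what preserves the shape of the predicted return distribution during training.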
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Riemannian Residual Neural Networks
Accept (poster)
Summary: EDIT: Having read everything here, I am increasing my score slightly. However, I still think the paper is not clearly explained. It is unclear why you first introduce the method without the feature map. It seems like there are two different versions of the method and it is not clear which is which. I am also confused by the experimental setup for datasets such as CORA. Do you first embed the graphs into a manifold? Additionally, I think that requiring a closed-form expression for geodesics is somewhat strong and limits the applicability. The authors attempt to extend ResNet structures to hyperbolic manifolds. Strengths: The paper is applicable to a wide variety of manifolds. Weaknesses: The proposed methods seem to not really leverage the manifold's intrinsic geometry and depend entirely on the embedding of the manifold into the ambient space $R^D$. Indeed, $n_i$ is defined on all of $R^D$. This means that there is no guarantee that there would be any notion of consistency if $M$ was embedded into $R^D$ and $R^{D'}$ in two different ways (where $D'$ may or may not equal $D$). This seems to be a major limitation that is not properly discussed. At a bare minimum, there should be some level of invariance to, e.g., rotations and translations of the manifold in $R^D$ after it has been embedded. Additionally, much of the paper is hard to understand, such as the construction of the feature maps, which appears to take place in local coordinate systems that will not be consistent across the manifold. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The proposed methods seem to not really leverage the manifold's intrinsic geometry and depend entirely on the embedding of the manifold into the ambient space $R^D$ A: Our approach uses intrinsic geometry (we use local coordinates, as mentioned by the reviewer). --- > Indeed, $n_i$ is defined on all of $R^D$. This means that there is no guarantee that there would be any notion of consistency if $M$ was embedded into $R^D$ and $R^{D'}$ in two different ways (where $D'$ may or may not equal $D$). This seems to be a major limitation that is not properly discussed. At a bare minimum, there should be some level of invariance to, e.g., rotations and translations of the manifold in $R^D$ after it has been embedded. A: $n_i$ is not invariant, but this is a necessary cost to realize our manifolds for computation and is standard practice in the literature [36, 38]. However, our feature map-based layers are notably invariant to embedding, which was the motivation for introducing them. Our feature maps are invariant to local coordinates and are consistent (e.g. SPD eigenvalues are invariant to conjugation and the hyperbolic space construction is invariant to choice of model).
Summary: This paper extends the well-known residual networks, which are usually applied to Euclidean data, to a variant defined on manifolds. The main novelty is to replace the "addition/plus" operation in Euclidean space with the exponential map. Specifically, given an input point on a certain manifold (e.g., hyperbolic or SPD), it first learns a vector in the tangent space of the given input point through a neural network layer, and then maps the learned vector back to the manifold using the exponential map. By utilizing pushforward and pullback operations, the proposed method can transform inputs between different manifolds with different dimensions. Experiments are conducted on hyperbolic and SPD spaces to demonstrate the superior performance of the proposed method over HNN and SPDNet. Strengths: This paper proposes to extend residual networks from Euclidean space to non-linear manifolds by replacing the conventional addition/plus operation with the manifold exponential operation. The theoretical part is sound and the experiments are effective in supporting the proposed method. Weaknesses: The biggest weakness, as also mentioned by the authors at the end of the paper, is whether the proposed method is only applicable to hyperbolic space and SPD matrices. Is it possible to apply this residual network to other non-linear manifolds that have closed-form exponential maps? Furthermore, is it possible to apply this residual network to other non-linear manifolds that do $\textbf{not}$ have closed-form exponential maps? If so, please list such manifolds. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In general: 1. Except for Euclidean, hyperbolic and SPD, are any other manifolds applicable to the proposed residual network? Method part: 2. For the function h_i defined in Line 202 parameterized by a neural network, given an input on a manifold M_{i-1}, how to ensure the output is also on a manifold M_i? 3. 
For SPD matrices, what's the advantage of the proposed Riemannian Residual Neural Networks over SPDNet (the AAAI paper) in theory? 4. In Line 225 - 231, what's the relationship between f_i and g_{\theta_i}? 5. Still in Line 225 - 231, why is $\nabla g_{\theta_i}$ a map from M to TM? 6. In the SPD case of Line 258 - 260, since g_k does not contain any learnable parameters, how to train such a network for SPD? Experiment part: 7. How many links on average per graph are used in the experiment in Section 5.1.1? It is important to verify that the proposed method could perform well on medium-to-large datasets. 8. In Table 2, the standard deviations of the proposed method are noticeably larger than those of SPDNet; does this imply the proposed method is less stable than SPDNet? (Similar phenomena also appear in Table 1, though less obviously.) Implementation part: 9. As some parts (e.g., Line 225 - 231) are not that obvious to implement, will the code be released? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors mentioned the limitations of the proposed method in Line 357 - 359. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and the constructive comments. We appreciate that you think our idea is theoretically sound and our experiments are supportive. We address your comments from the “Weaknesses” and "Questions" sections below. --- > The biggest weakness... closed-form exponential maps? A: Riemannian ResNets can be applied to more than just hyperbolic space and the manifold of SPD matrices; the answer to both questions posed is an unequivocal “yes.” So long as a closed-form exp map is provided, our method becomes immediately applicable. In fact, we include experiments on spherical space (which does have a closed-form exponential map) in sections C.11 through C.13 of the appendix. We provide a general feature map construction in B.5 which is applicable to any Riemannian manifold. Please see the question below for an explicit list of some of these manifolds. --- > Except for Euclidean, hyperbolic and SPD, are any other manifolds applicable to the proposed residual network? A: Yes, as mentioned above, our method is easily applicable to myriad manifolds assuming there is a closed-form exponential map provided. Common cases outside of the Euclidean, hyperbolic, and SPD manifolds include spherical space and the Grassmannian manifold. Every matrix Lie group is also included, so for example the manifold of special unitary matrices ($SU(n)$), the manifold of special orthogonal matrices ($SO(n)$), and the manifold of all invertible matrices ($GL(n, \mathbb{R})$). --- > Is it possible to apply this residual network to other non-linear manifolds that do not have closed-form exponential maps? A: We note that without a closed-form exponential map, we would need a differentiable ODE solver for the geodesic equations, immediately complicating the computation by a significant amount. 
However, we note that this is not a considerable limitation of our method in that nearly all prior work in this subfield requires at least a closed-form exponential map; see for example Neural Manifold ODEs [36], Riemannian Continuous Normalizing Flows [38], and Riemannian Convex Potential Maps [https://arxiv.org/abs/2106.10272]. --- > For the function $h_i$ defined in Line 202 parameterized by a neural network, given an input on a manifold $M_{i-1}$, how to ensure the output is also on a manifold $M_i$? A: We assume in our construction that $h_i$ maps $M^{(i-1)}$ to $M^{(i)}$. In general, one has to carefully construct these maps to ensure the output stays on the manifold. As a concrete example, our $h_i$ for the SPD case maps an SPD matrix of one dimension to another by conjugating with a Stiefel matrix [26]. --- > For SPD matrices, what's the advantage of the proposed Riemannian Residual Neural Networks over SPDNet (the AAAI paper) in theory? A: Besides the empirical benefits over SPDNet, our work also generalizes SPDNet theoretically. SPDNet only operates on matrices under the log-Euclidean metric, which endows the manifold with flat geometry (the sectional curvature is zero at every point). This model fails to provide a way to capture more nontrivial geometry of the SPD manifold, a drawback that our approach removes. We demonstrate that the proposed Riemannian ResNet can learn over both the log-Euclidean metric and the affine-invariant metric, which has non-constant sectional curvature. In Appendix A.2, we provide a table comparing the different operations involved with both metrics, indicating that the difference is substantial and we provide nontrivial flexibility. --- > In Line 225 - 231, what's the relationship between $f_i$ and $g_{\theta_i}$? A: Each coordinate of the output of $f$ is the output of a $g_{\theta_i}$, i.e. $f(p) = (g_{\theta_1}(p), g_{\theta_2}(p), …, g_{\theta_k}(p))$. 
--- > Still in Line 225 - 231, why is $\nabla g_{\theta_i}$ a map from M to TM? A: $\nabla g_{\theta_i}$ maps from M to TM because the gradient generates a vector field (tangent to the underlying space). If you wish to see an explicit reference, please refer to Chapter 13 of Introduction to Smooth Manifolds [33], where on page 342 (of the second edition) the gradient of $f$, a real-valued function over a smooth manifold, is defined. --- > In the SPD case of Line 258 - 260, since g_k does not contain any learnable parameters, how to train such a network for SPD? A: While the feature map itself has no learnable parameters, we incorporate learnable parameters from other parts of the neural network. For instance, we can learn over the extracted features $g_k$ with other neural network layers to obtain a vector field. Appendix B.2.3 describes this in detail. > How many links on average per graph are used in the experiment in Section 5.1.1? It is important to verify that the proposed method could perform well on medium-to-large datasets. A: We provide the total number of edges and the number of edges used in training for each of the graphs below: - Total edges - Airport: 18631 - Disease: 2664 - Cora: 5278 - Pubmed: 44327 - Train edges - Airport: 15837 - Disease: 2265 - Cora: 4488 - Pubmed: 37679 As can be seen, most of the edges are used as links for training. These datasets range from small/medium to relatively large (e.g. Pubmed). --- > In Table 2, the standard deviations of the proposed method are noticeably larger than those of SPDNet; does this imply the proposed method is less stable than SPDNet? A: Although in Table 2 the standard deviations for our method are larger than those for SPDNet for the AFEW and HDM05 datasets (arguably, they are similar for FPHA and NTU), in Figure 5 (referenced in Appendix C.8) we find that our proposed method converges faster and to a higher value than SPDNet, despite sometimes experiencing increased standard deviation. 
In particular, even for HDM05, which has higher standard deviation in the table, this is exceptionally clear. --- > Will the code be released? A: We will release code for our experiments. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. It addressed my concerns. As the authors promised to release their code, I will keep my original rating.
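As a concrete illustration of the residual construction discussed in this thread (learn a tangent vector, then return to the manifold via a closed-form exponential map), here is a minimal pure-Python sketch on the unit sphere, one of the manifolds the rebuttal lists as admitting a closed-form exp map; the helper names and the toy vector field are ours, not the paper's code.

```python
import math

def project_tangent(p, v):
    # Remove the component of v along p so that v lies in the tangent space at p.
    dot = sum(pi * vi for pi, vi in zip(p, v))
    return [vi - dot * pi for pi, vi in zip(p, v)]

def sphere_exp(p, v):
    # Closed-form exponential map on the unit sphere:
    # exp_p(v) = cos(|v|) p + sin(|v|) v / |v|, for v tangent at p.
    norm = math.sqrt(sum(vi * vi for vi in v))
    if norm < 1e-12:
        return list(p)
    return [math.cos(norm) * pi + math.sin(norm) * vi / norm
            for pi, vi in zip(p, v)]

def residual_step(p, vector_field):
    # One Riemannian residual step: y = exp_p(v(p)), with v(p) projected
    # to the tangent space first so the output stays on the manifold.
    v = project_tangent(p, vector_field(p))
    return sphere_exp(p, v)
```

Because the vector is projected to the tangent space before the exponential map is applied, the output provably stays on the sphere, mirroring the care the rebuttal says is needed to keep outputs on the manifold.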
Summary: The paper generalizes the ResNet layer to non-Euclidean geometries by replacing the Euclidean sum with the exponential map. The theory is general and applies to any smooth metric. They propose a way to parametrize a vector field on the manifold which is more geometrically principled than the trivial vector field embedding. Empirically they show improvements in performance on hyperbolic datasets and SPD spaces compared to some baselines. Strengths: The paper's theory is general and applies to any smooth manifold metric. In a ResNet, a neural network produces a vector field and the output of the ResNet is the input plus that vector field. The proposed generalization, assuming access to a vector field on the manifold, defines the output of the ResNet as the exponential map of the input in the direction of that vector field. This is consistent with the Euclidean case. A vector field in Euclidean space can simply be the output of a neural network, while on a manifold more care is required. Using an embedded vector field is straightforward, but, as correctly pointed out by the authors, it is not very principled geometrically. The authors propose a computationally tractable and geometrically principled way of defining a parametric vector field on a manifold. The idea is to make use of a collection of $\mathbb{R}$-valued functions (obtained as projections onto hyperplanes or similar) to define, through pushforward and pullback, a vector field on the manifold. Extensive specific examples of such a collection of functions are given for the hyperbolic and SPD manifolds. Experiments are also performed in these cases, and the proposed approach appears to outperform the current state of the art. In the general manifold case (Appendix B.5), the idea of projection onto pseudo-hyperplanes is appealing and well-argued. And the further generalization to “hyper-disks” allows proper formal extension also to non-geodesically-complete manifolds. 
Weaknesses: The definition of the vector field (Appendix B.4) is not sufficiently formal and contains mistakes. Specifically, the authors assume access to a smooth function $f:M\rightarrow \mathbb{R}^k$, a so-called “feature map”. The differential $D_x f$ is then a linear map from the tangent space at $x$, $T_x M$, to the tangent space at $f(x)$, $T_{f(x)} R^k = R^k$. Observing that the dual of $R^k$ is isomorphic to $R^k$ itself, the pullback of the differential $(D_x f)^*$ can be seen as a map from $R^k$ to $(T_x M)^*$, for every $x\in M$. This function is evaluated at $f(x)\in R^k$ such that the vector field (as defined in line 215) is a map $$ l_f: x \rightarrow (D_x f)^*(f(x)) $$ which, as we saw, takes values in $(T_x M)^*$ and NOT in $T_x M$, as line 215 is saying. We would like to see this inconsistency explained. Is this based on the observation that both $(T_x M)^*$ and $T_x M$ are isomorphic to $\mathbb{R}^{dim(M)}$? And also, can you reason about the choice of evaluating the pullback of the differential (line 215) at $f(x)$? Is this somehow principled? The equivalence with the Euclidean ResNet shown in Appendix D is a proper extension in the case of an embedded vector field, but it is rather difficult to follow in the case of the feature map. Specifically, Proposition 1 is trivially proved for an embedded vector field, but the same argument should also apply to the case of a feature-map-induced vector field. It would be helpful if the discussion on $g_{w,b}$ could e.g. focus on the case of axis-aligned planes (i.e. each $w$ should be an element of a standard basis and $b = 0$), such that the differential $D_x f$ reduces to an identity. We found this to be significantly more intuitive. In the general manifold case (Appendix B.5), the idea of projection onto pseudo-hyperplanes is, although well explained, not at all investigated. 
First, it is not clear how to practically implement such projections onto these pseudo-hyperplanes (not in the geodesically complete case, and even worse in the general case). Second, there are no experiments regarding general manifolds, and there is also no discussion of the increased computational complexity with respect to the hyperbolic and SPD cases. This reduces “contribution 3” in the statement in lines 72-76. *Minor:* * The related work section argues that the proposed construction is different from a neural ODE (which also generalizes ResNets), but honestly, we found this argument to be incomplete. It does seem like the proposed construction is a neural ODE. * Line 126, we found it unclear which "specific structure" is being exploited in methods using Frechet means (these averages apply on practically all manifolds). *Regarding the score:* We are willing to increase the score if these concerns are appropriately discussed in the rebuttal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please reply to the weaknesses above. Furthermore: Major: * Line 215 and Appendix B.4. The function $l_f(x)$ is inconsistent. In the main paper it is a map $M \rightarrow TM$, while in the appendix it is a map $M \rightarrow (TM)^*$. Can you explain the inconsistency? Minor: * Line 189: $f_{nn}$ should be $f$? * The Log-Euclidean metric amounts to a Euclidean metric in the tangent space at the identity. In that case, does $f$ reduce to being $f: T_I M \rightarrow R^k$? If so, is your construction just a standard Euclidean ResNet in a pre-specified tangent space? That would be good to state explicitly, if so. * It's quite common to run into numerical instabilities (esp. on hyperbolic manifolds). Is that something you face? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The parting "Limitations" paragraph discusses a relevant assumption. However, we feel that a discussion of the tractability of (projections onto) "pseudo-hyperplanes" (Eq. 17 in the appendix) is lacking. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and the constructive comments! We address your comments from the “Weaknesses” and “Questions” sections below. --- > The definition of the vector field... are isomorphic to $\mathbb{R}^{\dim(M)}$? A: Since our manifold is equipped with a Riemannian metric, there is a canonical isomorphism, induced by the metric, between $(T_x M)^*$ and $T_x M$. Composing the differential operator $(D_x f)^* : R^k \rightarrow (T_x M)^*$ with this yields the map that we notate as $(D_x f)^*_r : R^k \rightarrow T_x M$ in Appendix B.4, which we precompose with $f$ to obtain $\ell_f : M \rightarrow TM$. For more information about the isomorphism between $(T_x M)^*$ and $T_x M$ we encourage the reader to consult “The Tangent-Cotangent Isomorphism” section from chapter 13 of [33]. For simplicity, by abuse of notation we overload $(D_x f)^*$ to mean $(D_x f)^*_r$ in the main paper, and point the reader to Appendix B.4 for details. However, we agree that this is confusing and will add details to the main paper about this step, thereby resolving any inconsistencies. --- > And also, can you reason... somehow principled? A: We can describe the idea here, more broadly. The differential provides a natural way to map from $T_p M$ to a Euclidean space (since $T_{f(p)} R^k$ is Euclidean). We seek a natural map into the tangent space, so we take the pullback to obtain a natural map from $R^k$ into the dual space $(T_p M)^*$, and then dualize (i.e. use the tangent-cotangent isomorphism) to obtain a map from $R^k$ into $T_p M$, as desired. One motivation is that in the Euclidean case, because the maps $f(x)$ and $(D_x f)^*$ are linear, $\ell_{f}$ will reduce to a standard linear layer, which, in combination with the Euclidean $\exp$ map, will produce a standard Euclidean residual neural network. --- > The equivalence with... more intuitive. A: Yes, you are precisely correct. 
You can generalize the argument in Proposition 1 to the case of feature map-induced vector fields by noting that if the feature maps are projections onto standard axis-aligned planes, the same reduction happens as was shown to hold in Proposition 1 for embedded vector fields. --- > In the general manifold... the statement in lines 72-76. A: The idea of the general manifold case feature projection is proposed mostly as a natural theoretical extension of the feature projection theory we had been developing up until that point. Please note that we tested this approach experimentally in Appendix C.4 for the hyperbolic case, and compared the performance of these pseudo-hyperplanes with the horosphere-projection based feature map approach. However, in general, applying this method may require more specific investigation of the manifold over which optimization is occurring (i.e. there may be an easier approach to obtaining geodesic distances than explicitly solving the geodesic equations). On a separate note, our claim stated in lines 72-76 refers to the ability to change the metric on the same manifold, as done in the SPD experiments in Section 5.2. --- > (minor) The related work section... is a neural ODE. A: First, it is worth pointing out that the construction is quite different from a neural ODE, in that the vectors exist on the tangent space of a manifold, not just in $R^n$. Second, speaking of neural manifold ODEs, for which the residual vectors exist in tangent spaces, our construction is effectively a generalized neural manifold ODE. Looking more closely at a manifold ODE [36, page 4], we see the neural network depends on time and generates a “flow”. A Riemannian ResNet requires only the provision of a vector field, entirely untethered from any notion of solving an ODE or from a time variable. This makes it a strict generalization, suitable for use in a general manifold neural network context. --- > (minor) Line 126, we found it... all manifolds. 
A: We meant mostly that the Helgason-Fourier construction exploits a fairly particular structure, but we also believe it is worth noting that weighted Frechet means are specifically introduced for convolution, which is not the focus of our work (we focus on residual connections). We will augment the writing to make this clear. --- > (question, major) Line 215... explain the inconsistency? A: By the tangent-cotangent isomorphism, we pass from $(T_p M)^*$ to $T_p M$. Please see our above comments regarding this for more details. The map we actually use in practice goes from $M$ to $TM$. We will make this explicit for clarity. --- > (question, minor) Line 189: $f_{nn}$ should be $f$? A: Yes, thank you for pointing out this typo. This will be fixed. --- > (question, minor) The Log-Euclidean metric... if so. A: For the embedded vector field, the answer is yes. This is one shortcoming we observed with SPDNet, which only operates with the Log-Euclidean metric, motivating our additional use of the affine-invariant metric. For the feature map-induced vector field, where the feature maps are eigenvalue projections, the answer is no. Learning happens by way of repeated spectral remapping. --- > (question, minor) It's quite common... you face? A: This is something we have encountered, but there are ways of limiting the effect of such instabilities on the results. In particular, for the Poincare ball model we use, instabilities occur at large distances away from the origin, where $r \in (1-\epsilon, 1)$ for $\epsilon < 0.001$. As a consequence, we limit the hyperbolic distances and do projections in order to keep our optimization numerically stable. But an approach that is fully naive will likely run into some numerical difficulties, as you suggested. --- > (limitations concern) The parting "Limitations" paragraph... is lacking. A: We will add a separate discussion about tractability of pseudo-hyperplanes. 
We give this construction as an appealing theoretical generalization, but application to particular manifolds will require some care. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: We thank the author for the reply. However, we would like to ask for further clarification. > One motivation is that in the Euclidean case, because the maps ... will produce a standard Euclidean residual neural network. This is very interesting and can be a sufficient motivation for the choice of evaluation on $f(x)$. Can the author provide step-by-step reasoning and proof of this statement? Why will $l_f$ reduce to a standard linear layer? --- Reply to Comment 1.1.1: Title: Further clarification Comment: We can certainly do so. Note for the Euclidean case, that our feature map $f : \mathbb{R}^n \rightarrow \mathbb{R}^k$ will, for example ($b=0$, $W$ has normalized row vectors), take the form $f(x) = Wx, W \in \mathbb{R}^{k \times n}$. Then note that we have $Df = W$ and $(Df)^* = W^T$. We see for the standard feature map-based construction, our vector field $\ell_f (x) = (D_x f)^* (f(x))$ takes the form $\ell_f (x) = W^T W x$. For the learnable case (which is standard for us, given that we learn Riemannian residual neural networks), note from Lines 217-219 that we have $\ell_{f,\theta} (x) = (D_x f)^* (n_\theta (f(x)))$ for $n_\theta$ a neural network. Hence we have $\ell_f (x) = W^T n_\theta (W x)$. For the case that you mentioned before, i.e., when the feature maps are trivial projections (onto axis-aligned hyperplanes), we have $W= I$ and $\ell_f (x) = n_\theta(x)$. Thus our construction can be viewed as a generalization of a standard neural network.
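The reduction derived in the comment above ($\ell_{f,\theta}(x) = W^\top n_\theta(Wx)$, which collapses to $n_\theta(x)$ when $W = I$) can be checked numerically; the following pure-Python sketch uses hypothetical helper names and is our own illustration, not the paper's code.

```python
def matvec(W, x):
    # W is a list of rows; returns W @ x.
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def matvec_T(W, y):
    # Returns W^T @ y without forming the transpose explicitly.
    return [sum(W[i][j] * y[i] for i in range(len(W))) for j in range(len(W[0]))]

def ell_f(W, n_theta, x):
    # Feature-map-induced vector field in the Euclidean case:
    # ell_{f,theta}(x) = (D_x f)^* (n_theta(f(x))) = W^T n_theta(W x).
    return matvec_T(W, n_theta(matvec(W, x)))
```

With $n_\theta$ the identity this returns $W^\top W x$, the linear case from the reply; with $W = I$ it returns $n_\theta(x)$, recovering a standard residual block.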
Summary: The paper proposes an extension of standard ResNets called Riemannian Residual Neural Networks. The extension is based on Riemannian manifolds, as discussed in Equation (2). Some numerical results on node classification problems are presented in Section 5 to show the improvements of the proposed generalization of ResNets. -- Post-rebuttal Review Update -- I thank the authors for the detailed responses to my comments. I find the responses satisfactory and raise my score to 6. Strengths: 1- The idea of Riemannian ResNets sounds interesting and, as the numerical results suggest, could help improve the performance of ResNet models in applications where the chosen Riemannian geometry suits the dataset. Weaknesses: 1- The paper's presentation remains abstract in the main body, and I do not find the current presentation accessible enough to deep learning practitioners. For example, Section 3 spends about 2.5 pages explaining Riemannian geometry but does not discuss a concrete example where the exponential map and vector fields can be discussed. The examples in Section 4.2.1 appear late in the draft and also do not derive the expression for the exponential map that appears in Riemannian ResNets. 2- Since the paper has not discussed the algorithmic steps of training and evaluating a Riemannian ResNet, it is not that easy to see how the network can be trained for non-Euclidean Riemannian geometries. I suggest adding one or two algorithms to the draft to discuss the steps of training a Riemannian ResNet for the cases discussed in Section 4.2.1. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How did the computational costs of training a Riemannian ResNet compare to those of a standard ResNet? Could the Riemannian ResNet demand more computational power for training than the normal ResNet? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please see my previous responses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and the constructive comments. We appreciate that you think our idea is interesting and has the potential to improve the performance of ResNets over datasets with chosen Riemannian geometry. We address your comments from the “Weaknesses” section below as well as your questions from the “Questions” section. --- > The paper's presentation remains abstract in the main body, and I do not find the current presentation accessible enough to deep learning practitioners. For example, Section 3 spends about 2.5 pages explaining Riemannian geometry but does not discuss a concrete example where the exponential map and vector fields can be discussed. The examples in section 4.2.1 appear late in the draft and also do not derive the expression for the exponential map that appears in Riemannian ResNets. ‎ Since the paper has not discussed the algorithmic steps of training and evaluating a Riemannian ResNet, it is not that easy to see how the network can be trained for non-Euclidean Riemannian geometries. I suggest adding one or two algorithms to the draft to discuss the steps of training a Riemannian ResNet for the cases discussed in Section 4.2.1. A: We apologize for any confusion. Due to page limits, we decided to offload these details to the appendix. We would like to refer the reader to the Appendix where we extensively document how to implement a Riemannian ResNet: 1. In Appendix A, we provide a table of the expressions we use to compute the exponential map on various manifolds. All operations can be performed with standard PyTorch operations. 2. In Appendix B, we elaborate on the Riemannian ResNet design presented in Section 4, discussing the tradeoffs of each design. 3. Appendix C outlines more experimental details. In Appendix C.7, we give a concrete example of how we constructed a Riemannian ResNet for covariance matrix classification. We will include more of these details in the main body in a revised version. 
--- > How did the computational costs of training a Riemannian ResNet compare to that of a standard ResNet? Could the Riemannian ResNet demand more computational power for training than the normal ResNet? A: One of the key benefits of a Riemannian ResNet is that it can capture geometric invariants of the training data. This can reduce the parameter count of the neural networks used, improving both performance and efficiency. For example, because the Riemannian ResNet for SPD matrices acts on eigenvalues, it is invariant to change of eigenbasis. While the size of an $n \times n$ matrix grows at an $O(n^2)$ rate, the number of eigenvalues grows at only an $O(n)$ rate, leading to computational efficiency. We demonstrate these benefits in Appendix C.10, where the Euclidean ResNet performs worse than the Riemannian ResNet across all SPD datasets. However, computation can be a challenge when closed-form solutions to the exponential maps are unknown. We believe that our work can highlight these computational benefits of geometric machine learning, and will clarify this in a revised version. --- Rebuttal 2: Title: Follow-up Comment: Dear reviewer, We would like to ask if we have addressed your concerns, and if so, if it would be possible to raise your rating for the paper? If any additional questions have arisen, please let us know.
NeurIPS_2023_submissions_huggingface
2023
Precise asymptotic generalization for multiclass classification with overparameterized linear models
Accept (spotlight)
Summary: The paper studies the asymptotic generalization error behavior of an overparameterized linear model under the Gaussian covariates bi-level model. In this setup, the number of data points, features, and classes all grow together. The authors manage to fully characterize the regimes of the generalization error, which is surprisingly “polarized”. This solves the conjecture posed by Subramanian et al. (2022). Strengths: Both the theoretical result achieved and the technical tools used (like the newly established Hanson-Wright inequality) are highly novel and strong. Even though I am not familiar with the literature, I estimate that the work is quite valuable. Weaknesses: - The considered assumptions for the distribution of features are very simplistic. Both independence and having identical Gaussian distributions are very restrictive, which highly influences the practicality of the results. - The paper studies the generalization error of linear models, which are far from deep neural networks. In this sense, there is still a big gap in the practical aspects of the paper (and also the previous literature on this). However, it is totally understandable that these are the first steps toward that goal. - The paper lacks the needed intuitions about the obtained results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It would be useful to provide more intuition about the considered “bi-level ensemble” model. In this model, it seems that while we are considering $n^p$ dimensions, the ``effective dimensions'' are the favored ones; meaning that the features in the rest of the dimensions (and their relative magnitude) somehow either add a “useful noise injection” or become dominant with respect to the “signal”, which makes prediction impossible. This might be inexact, but this is just to give an example of what kind of intuition I refer to.
- Similarly, it would be extremely useful to discuss the different regimes of the theorems. Why would having $r$ close to 1 make the learner fail? Why would having a small $p-(q+r)$ do so? - Finally, the scope of the paper remains a bit narrow (and purely technical), and the authors could probably use the intuitions derived from their results to discuss some potential understanding that these results could give about some more realistic setups. This is done briefly in the discussion section, but it would be appreciated if it were extended. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: As discussed above, and I guess the authors would also agree, the considered model is quite restrictive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and detailed feedback. > The considered assumptions for the distribution of features are very simplistic. Both independence and having identical Gaussian distributions are very restrictive; which highly influences the practicality of the results. - Because our paper focuses on resolving the conjecture of Subramanian et al., we chose to adopt the same model and notation for ease of comparison. - With respect to independence, we agree that it is a strong simplifying assumption. However, it is not essential for every part of the argument. In several places, we use union bound, so independence is not really needed. Furthermore, in the Gaussian world, independence is assumed mainly just for exposition. As long as we assume the labels are being determined by a subspace of the features, we can do a basis change for the sake of analysis to make the covariance diagonal (and the learning algorithm need not know about this basis change). - Generally speaking, we made these simplifying assumptions for the sake of tractability. Of course, there is a need to move beyond flat covariances, but it requires more work. As we explain in the response to reviewer V1nE, we think that many of the same results hold if we allow the covariance to be heterogeneous along the non-label defining directions, but the situation likely becomes much more complex if the label-defining directions are allowed to be heterogeneous. - We think that going from Gaussian to subgaussian distributions will be fine if we use an appropriate vector notion of subgaussianity that allows us to perform basis changes. One place where the Gaussianity assumption is used explicitly is the margin computation, where we needed to use the fact that the gap between the max and second max decays at a logarithmic rate. It is certainly of interest to precisely identify where else we actually need the Gaussian assumption and where we can relax it. 
> The paper studies the generalization error of linear models, which are far from deep neural networks. In this sense, there is still a big gap in the practical aspects of the paper (and also the previous literature on this). However, it is totally understandable that these are the first steps toward that goal. - We agree that like many other papers in the field, our work is just one of the first steps towards a more robust theory for deep networks. However, in finetuning settings where a nonlinear network is massively overparameterized with respect to the finetuning data, one can use the NTK approximation to study the finetuning behavior with overparameterized generalized linear models. Here, the pretraining of the model creates an implicit nonlinear lifting of the inputs into a high dimensional space, where every parameter of the model corresponds to one nonlinear feature in the lifting (via the first-order Taylor expansion of the model with respect to the finetunable parameters). It is an interesting question to determine whether real systems actually have the type of behavior predicted by our results. (In fact, the nature of our results points to the need for work in this direction because simply looking at the empirical eigenvalues of the training data might not reveal the existence of the underlying spiked structure that helps generalization work.) > The paper lacks the needed intuitions about the obtained results. - We will try to add some more intuition along the lines of your suggestion below. > It would be useful to provide more intuition about the considered “bi-level ensemble” model. In this model, it seems that while we are considering $n^p$ dimensions, the "effective dimensions" are the favored ones; meaning that the features in the rest of the dimensions (and their relative magnitude) somehow either add a “useful noise injection” or become dominant with respect to the “signal”, which makes prediction impossible.
This might be inexact, but this is just to give an example of what kind of intuition I refer to. - Thanks for the suggestion about discussing the theorem. Your intuitions about the bi-level model are essentially correct. Below we answer your questions more directly. - The intuition for $ r \approx 1 $ being a failure mode is that the model is effectively severely overparameterized, even after restricting to the favored features. In particular, since there are $n^t$ classes, the failure condition $t + r > 1$ precisely captures the right sense of bad overparameterization restricted to the subspace of $n^r$ favored features. - The other condition about $ p - (q+r) $ is about the feature weighting for favored features: if the favoring is too small, classification will not succeed because the noise/contamination from the unfavored features will dominate the signal. We will include this discussion in the revision. - The paper's discussion of the "just average positive exemplars" (non-interpolating) classifier should also help give a complementary intuition for the failure conditions. > Finally, the scope of the paper remains a bit narrow (and purely technical) and the authors could probably use the intuitions derived from their results to discuss some potential understanding that these results could give about some more realistic setups. This is shortly done in the discussion section, but it would be appreciated to be extended. - We agree that the main contribution of our paper is technical. However, one consequence of the technical work is that it justifies the style of heuristic calculation that led to the conjecture in the first place (carried out in the appendix of Subramanian et al.). We believe that a similar type of heuristic reasoning can be more widely applied in more realistic settings, with tools such as our sharpened Hanson-Wright inequality potentially being useful to justify those calculations. 
--- Rebuttal Comment 1.1: Comment: To the authors: your response has been read and is being considered.
Summary: The paper titled "Asymptotic Generalization of Overparameterized Linear Models for Multiclass Classification under Gaussian Covariates Bi-level Model" presents a study on the asymptotic generalization of overparameterized linear models for multiclass classification under the Gaussian covariates bi-level model. The authors provide an asymptotic characterization of the generalization of a linear model for multiclass classification in an idealized Gaussian setting where a) the number of data points, b) the dimension, and c) the number of classes diverge while their ratio remains fixed. An interesting result, in particular, is that the min-norm interpolating classifier can be suboptimal in this regime. Strengths: The paper presents many strong analytical results. In particular, the authors have successfully resolved a conjecture posed in a previous work (Subramanian et al. '22) and established new lower bounds that demonstrate that the misclassification rate either goes to 0 or 1 asymptotically. The paper also introduces a new variant of the Hanson-Wright inequality, a tool used in high-dimensional probability, that is particularly useful for multiclass problems with sparse labels. Sparse labels refer to situations where only a small number of classes are represented in the dataset. Weaknesses: Technical Quality: 3 good Clarity: 3 good Questions for Authors: How realistic is the bi-level feature weighting model? I am thinking instead of the source/capacity setting where I would expect a more power-law-like behavior. How different would the conclusions be? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper is theoretical in nature and its limitations are stated in the theorems Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and feedback. > How realistic is the bi-level feature weighting model? I am thinking instead of the source/capacity setting where I would expect a more power-law-like behavior. How different would the conclusions be? - Because our paper focuses on resolving the conjecture of Subramanian et al., we chose to adopt the same model and notation for ease of comparison. - You are correct that in applications for spiked covariance models, one typically sees power-law-like behavior. We expect that the bi-level model can be relaxed to allow for constant deviations in the weightings for the $s-k$ favored features that are not label defining, and a power law decay for the $d-s$ unfavored features. The former change would likely only affect constants in certain areas of the argument that do not crucially depend on the exact constants involved, whereas the latter would likely just change the effective degree of overparameterization (à la Bartlett et al.). In this type of more relaxed bi-level setup, we expect an analogous result to hold, but ironing out these details is definitely an important next step. - However, even allowing for constant fluctuations in weighting for the label-defining features can lead to many subtleties. Even constant fluctuations in label-defining weightings manifest as polynomial variations in how many examples of each class there are. Hence the heterogeneity between label-defining directions would likely lead to a much messier condition for generalization. - On that note, if one is able to analyze the heterogeneity in label-defining features, one would naturally also capture a setup with imbalanced classes, which is more representative of real-life applications. We leave this as an interesting direction for future work.
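To illustrate the effect the rebuttal alludes to, the sketch below (our own toy construction, not from the paper; all sizes and the power-law exponent are arbitrary illustrative choices) contrasts a flat bi-level tail with a power-law tail through an effective-rank quantity in the spirit of Bartlett et al.'s benign-overfitting analysis.

```python
import numpy as np

# Toy spectra: a flat bi-level spectrum vs. one whose unfavored tail
# decays as a power law.  (Illustrative assumption, not the paper's model.)
d, s = 1000, 20                      # ambient dimension, favored directions
flat = np.ones(d)
flat[:s] = 50.0                      # two discrete variance levels
power = flat.copy()
power[s:] = np.arange(1, d - s + 1, dtype=float) ** -0.5  # power-law tail

def effective_rank(spectrum, k):
    """r_k = (sum of eigenvalues beyond the k-th) / ((k+1)-th eigenvalue),
    one of the effective ranks used in benign-overfitting analyses."""
    lam = np.sort(spectrum)[::-1]
    return lam[k:].sum() / lam[k]

r_flat = effective_rank(flat, s)     # flat tail: exactly d - s
r_power = effective_rank(power, s)   # power-law tail: much smaller
```

The flat tail gives effective rank $d - s$, while the decaying tail shrinks it substantially, which is the sense in which a power-law tail "just changes the effective degree of overparameterization."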
Summary: This is a theoretical paper that gives insight into how and when an overparametrized linear classification model, for multi-class classification, can generalize successfully. In particular, they look at multiclass classification under the Gaussian covariates bi-level model introduced by Subramanian et al. in 2022, and fully resolve a conjecture from that paper. The key to their analysis is a new variant of the Hanson-Wright inequality. Strengths: This paper builds on previous work to give tight bounds on the regions where generalization is possible, improving significantly on previous partial results. It also makes rigorous previous analyses based on heuristic calculations. This is a very well-written paper. Based on the proof sketch given in the main paper, the technical level of the proofs seems high, involving both the use of existing techniques as well as the proof of a new version of the Hanson-Wright inequality. Weaknesses: No significant weaknesses were noted. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: It would be helpful to comment on the limitations of the bi-level model studied. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review. > It would be helpful to comment on the limitations of the bi-level model studied. - As mentioned in the general comment, because the main contribution of our paper is resolving the conjecture of Subramanian et al., we chose to adopt the same model and notation for ease of comparison. - Certainly, the bi-level model, being a simplified caricature model, has its own limitations which make it a little unrealistic. For example, the bi-level structure implies that the training dataset has balanced classes, which is often unrealistic in real-world scenarios with lots of classes. Relaxing some of these assumptions is an interesting direction for future work. See also our rebuttal to Reviewer V1nE for a more in-depth discussion of how one might be able to relax the assumptions of the bi-level model. --- Rebuttal Comment 1.1: Comment: To the authors: your response has been read and is being considered.
Summary: In their main result, Theorem 3.2, the authors establish Conjecture 3.1, which is a conjecture posed in 2022 describing the asymptotic misclassification probability of the bi-level ensemble model (Definition 1) under a sparsity assumption (Assumption 1). They provide a rigorous and tight analysis, with very clear explanations both in-text and in the appendix. Strengths: The paper is clearly written and explained wonderfully, the proofs are detailed, and the contributions are interesting. The proofs are sound and clearly articulated. Minor point, the appendix is also very neatly organized which makes proofreading nice. **Update:** My questions and concerns have been addressed. Weaknesses: Overall the paper is rigorous and clear, so I do not really have any significant weakness worth reporting; only a few minor comments. *Only minor comments:* - In equations (36)-(40) could you explicitly write the polylogarithmic terms in the denominator, or at the very least, precisely define them after the equation environments. - Could you formally state that all r.v. are defined on the same probability space, at the beginning of the paper; for rigor. - Very minor point, above equation (219), I guess $\boldsymbol{A}_{-\boldsymbol{S}}^{-1}$ is *block diagonal* not diagonal. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: There are two little details, which I didn't fully follow, so let me ask: - Perhaps a silly question, but in equation (182), are you using a Brascamp–Lieb inequality (or something simpler which I'm possibly missing)? - In equation (240) do you mean $n^{t-1}\hat{\boldsymbol{f}}_1[1]$ is $\Theta(n^{(p-q-2)/2})$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and suggestions. > In equations (36)-(40) could you explicitly write the polylogarithmic terms in the denominator, or at the very least, precisely define them after the equation environments. - Yes, we will include the explicit polylog factors. For clarity, all of these terms are at most $\log(nsk)$. > Could you formally state that all r.v. are defined on the same probability space, at the beginning of the paper; for rigor. - We will add this to the paper as requested. The probability space for training can be viewed as being $n \times d$ iid standard Gaussians, and extended for each test point by another $d$ iid standard Gaussians. All random variables for us are either linear combinations of the underlying iid Gaussians or functions of those random variables. There is no other randomness in the setup. > Very minor point, above equation (219), I guess $\boldsymbol{A}\_{-S}^{-1}$ is block diagonal not diagonal. - We see how there might be some confusion with the wording of the argument here. We will explicitly add the definition of the matrices into the notation table. We will also replace the text leading up to (219) with the following: Since $\boldsymbol{A}\_{-S}^{-1}$ is symmetric, it has an eigendecomposition $\boldsymbol{V} \boldsymbol{D} \boldsymbol{V}^\top$, where $\boldsymbol{V}$ is an orthogonal matrix. Because $\boldsymbol{W}\_T$ is a weighted subset of only the (equally) favored features, its law is rotationally invariant. Furthermore, since $\boldsymbol{A}\_{-S}^{-1}$ is independent of $\boldsymbol{W}\_T$ (as $T \subseteq S$), we can absorb the rotation $\boldsymbol{V}$ into $\boldsymbol{W}\_T$ to reduce to the case where $\boldsymbol{A}\_{-S}^{-1} = \boldsymbol{D}$. > Perhaps a silly question, but in equation (182), are you using a Brascamp–Lieb inequality (or something simpler which I'm possibly missing)?
- There is a typo here (thank you so much for pointing it out); there should be an expectation on the LHS: it should be $$\mathbb{E} \exp(\lambda S_{\mathrm{diag}}) = \prod_{i=1}^n \mathbb{E}\_{X\_i, Y\_i} \exp(\lambda m\_{ii} (X\_iY\_i - \mathbb{E}[X\_iY\_i])),$$ which is just using independence of the $(X\_i, Y\_i)$ pairs. After that, we apply Jensen’s inequality to get to the next line. > In equation (240) do you mean $n^{t-1}\hat{\boldsymbol{f}}_1[1]$ is $\Theta(n^{(p-q-2)/2})$? - Yes, that is correct, we will fix the notation in the revision. --- Rebuttal Comment 1.1: Title: Happy with edits Comment: Dear authors, Thank you very much for the clear response and very nice paper. Good luck :)
Rebuttal 1: Rebuttal: We thank all of the reviewers for their comments and feedback. Below, we highlight some high level takeaways which address a set of questions shared across multiple reviewers. ## Bi-level model - Because the main contribution of our paper is fully resolving the conjecture of Subramanian et al. from NeurIPS 2022, we chose to adopt the same model and notation in the paper for ease of comparison. - The high level conceptual picture of the setup is as follows: The learner simply observes jointly Gaussian zero-mean features with some **unknown** covariance structure $\Sigma$ in a very high dimensional space, and performs min-norm interpolation of essentially one-hot-encoded labels to learn the score functions. These score functions are used at test time to do multi-class classification. - Note that the learning algorithm has no knowledge of $\Sigma$, nor does it even know that the features are jointly Gaussian. *All the learning algorithm has is the training data.* - For analysis purposes, $\Sigma$ is parameterized. In the spirit of spiked covariance models, where a low-dimensional subspace has higher variance, we study the case that the *eigenvalues* of $\Sigma$ follow the simplified bi-level model parameterized by $(p, q, r)$. The bi-level model stipulates that there are two discrete variance levels, the higher of which lies in the low-dimensional subspace. For this bi-level model, we are able to prove sharp phase-transition style results telling us when successful generalization will happen. Here, the number of classes also matters, and that is where the parameter $t$ enters. - This bi-level model is a stylized linear version of the well-known manifold hypothesis, which stipulates that real-world high dimensional data actually approximately lie on a low rank manifold that is unknown to the learning algorithm. - To simplify notation in the paper, Subramanian et al.
just assume $\Sigma$ is diagonal to begin with instead of explicitly rotating coordinates to the eigenbasis of a general $\Sigma$. Because min-norm interpolation only cares about norms, this is without loss of generality. The camera-ready version will make this setup clearer.
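As a concrete toy instantiation of the pipeline described above (our own sketch, with small arbitrary sizes rather than the paper's $n^p$, $n^t$ scalings), min-norm interpolation of one-hot labels followed by argmax scoring can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, s = 64, 512, 4, 16          # points, features, classes, spike dim

# Bi-level diagonal covariance: s "favored" directions with higher variance.
variances = np.ones(d)
variances[:s] = 10.0
X = rng.standard_normal((n, d)) * np.sqrt(variances)  # Gaussian features
labels = rng.integers(0, k, size=n)
Y = np.eye(k)[labels]                # one-hot-encoded labels

# Min-norm interpolation of the one-hot labels: W = X^+ Y via the
# Moore-Penrose pseudoinverse.  Note the learner never sees the
# covariance or (p, q, r, t) -- only the training data.
W = np.linalg.pinv(X) @ Y

# At test time, evaluate the k learned score functions and return the
# class with the highest score.
x_test = rng.standard_normal(d) * np.sqrt(variances)
predicted = int(np.argmax(x_test @ W))
```

In the overparameterized regime ($d > n$), `pinv` yields the minimum-Frobenius-norm solution, so the learned score functions interpolate the training labels exactly while depending on nothing but the data, matching the "all the learning algorithm has is the training data" point above.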
Summary: In this paper, the authors analyze the generalization of linear multiclass classifiers in the overparametrized regime under the bi-level model with Gaussian covariates. In particular, they prove a conjecture made in a previous paper about the region (characterized by the parameters of the bi-level model) under which the generalization error can go to zero. In fact, they establish a `0-1` law for the generalization error: depending upon the parameter regime the probability of error either converges to zero or one. In the process of proving this result the authors also establish a generalization of the usual Hanson-Wright inequality for quadratic forms to exploit "soft sparsity". Overall, I feel that this paper contains a technically rigorous analysis of the overparametrized classification problem under somewhat restrictive model assumptions. Strengths: The paper is well written, and while the material is quite technical, the authors do make an effort to provide sufficient context and explanations to motivate them. In particular, the detailed overview of the argument used in proving the main result (Theorem 3.2) is quite helpful in getting the idea used in the proof. Weaknesses: 1. While this paper does not seem to have any obvious flaws, I feel that the model analyzed in this paper is a bit too restrictive. In a footnote on page 4, the authors mention that "such models are widely used to study learning even beyond this particular thread of work". It would be very helpful if the authors include a detailed discussion of at least one such practical application, which is naturally modeled by the bi-level model studied in this paper. 2. Besides the model, even the construction of the classifier makes some strong assumptions. In particular, the classifier is constructed after reweighting the features with weights $(\lambda_i)_{i \geq 1}$ that are tuned to the specific bi-level model.
Since in most practical problems, there is at least some level of misspecification, I feel that this reduces the practical utility of the results in this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you include a discussion of some practical tasks where the bi-level model arises naturally? 2. Regarding the second point in the weaknesses section, can you discuss (at least informally) the effect of model misspecification on the generalization guarantees? More generally, do you think that the same performance guarantees as Theorem 3.2 (i.e., probability of error converging to zero for the same parameter ranges) can be achieved adaptively without using the knowledge of model parameters ($p, q, r, t$) to construct the classifier? Or is there a price to pay for achieving zero generalization error adaptively? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I think that the strong model assumptions used in this paper might reduce the practical utility of the results of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and suggestions. > While this paper does not seem to have any obvious flaws, I feel that the model analyzed in this paper is a bit too restrictive. In a footnote on page 4, the authors mention that "such models are widely used to study learning even beyond this particular thread of work". It would be very helpful, if the authors include a detailed discussion of at least one such practical application, which is naturally modeled by the bi-level model studied in this paper. - Please see the general comment for an overview and motivation of the bi-level model. To repeat the key takeaway here, the bi-level model is a stylized version of the spiked covariance model. - Spiked covariance models have been extensively studied in the statistics literature (see e.g. [1], [2]), and are a common assumption for many statistical applications. Some concrete theoretical models related to the spiked covariance model include the sparse/nonnegative/tensor PCA problems and stochastic block model. For real life applications, some examples (lifted from [1]) include climate studies and functional data analysis. >Besides the model, even the construction of the classifier makes some strong assumptions. In particular, the classifier is constructed after reweighting the features with weights $(\lambda\_i)\_{i \ge 1}$ that are tuned to the specific bi-level model. Since in most practical problems, there is at least some level of misspecification, I feel that this reduces the practical utility of the results in this paper. - As mentioned above, we tried to stay as consistent as possible with the notation and setup of Subramanian et al., who used this explicit reweighting procedure in the exposition of the setup so that their analysis would be notationally simpler. 
We would like to emphasize that this explicit reweighting is done purely for analysis purposes, and in reality the actual classifier does *not* do any reweighting of features. The classifier observes jointly Gaussian features with labels during training and does min-norm interpolation to learn the score functions. At test time, the test vector is fed into the $k$ score functions to produce the labeling by looking for the highest score. The classifier has no dependence on the $\lambda_i$, or $p, q, r, t$. It just sees the training data. - In the real world, misspecification is absolutely a real concern, so the reviewer’s concern about the role of misspecification is certainly warranted. So, an interesting question is to what extent is this classifier robust to distribution shift at test time? Roughly speaking, our intuition is that with small enough (non-adversarial) shifts, the margin that the classifier currently leverages to succeed would allow the classifier to still generalize. This is because our analysis is largely tied to the training distribution (establishing appropriate concentration, etc.). Quantifying this robustness is an interesting future direction and is in fact related to the fact that the min-norm interpolator generalizes for multi-label classification as well. We did not discuss the connection to misspecification in the paper at all, but if you think it is important please let us know and we will be sure to mention it in the revision. > Can you include a discussion of some practical tasks where the bi-level model arises naturally? - See the above discussion about spiked covariance models, and we will add some into the revision as well. Certainly, the bi-level model, being a simplified caricature model, has its own limitations which make it a little unrealistic. For example, the bi-level structure implies that the training dataset has balanced classes, which is often unrealistic in real world scenarios with lots of classes. 
Relaxing some of these assumptions is an interesting direction for future work. - The bi-level model is a tractable linear proxy for the more widely accepted manifold hypothesis about real-world high-dimensional data. > Regarding the second point in the weaknesses section, can you discuss (at least informally) the effect of model misspecification on the generalization guarantees. More generally, do you think that the same performance guarantees as Theorem 3.2 (i.e., probability of error converging to zero for the same parameter ranges) can be achieved adaptively without using the knowledge of model parameters $(p, q, r, t)$ to construct the classifier? Or there is a price to pay for achieving zero generalization adaptively? - The model parameters $(p, q, r, t)$ are only used to parameterize the data generating assumptions (the bi-level model), and the min-norm-trained classifier itself does not need to know any of these parameters to learn the score functions. Hence, the classifier defined in the paper will achieve the stated performance guarantees. - Of course, it receives $d = n^p$ dimensional features and knows that there are $k = n^t$ classes, but it is hard to think of a situation where a learning algorithm would not know these ahead of time. On the other hand, the classifier definitely does not need to know $q$ or $r$. One of the interesting results of our paper is that multiclass classification can succeed even in the regime $q+r>1$, where the empirical covariance is completely flat, despite the spike being present in the true covariance. Put differently, although the data approximately lies on a low dimensional manifold, you can’t see it from the empirical covariance given the limited amount of data available, so that it is naively impossible to recover $q$ and $r$. [1] Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. Annals of Statistics, 29(2), 295-327. [2] Donoho, David L., Matan Gavish, and Iain M. 
Johnstone. "Optimal shrinkage of eigenvalues in the spiked covariance model." Annals of Statistics 46.4 (2018): 1742. --- Rebuttal 2: Title: Reply to the rebuttal Comment: Thank you for the clarifications about the bi-level model and model-misspecification. I am happy to update my score to 7.
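To make the min-norm interpolation scheme discussed in this rebuttal concrete, here is a minimal numpy sketch; the dimensions and the 1-sparse labeling rule below are illustrative placeholders, not the paper's exact bi-level setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 2000, 4          # overparameterized: d >> n

# Gaussian features; labels from a hypothetical 1-sparse rule on the first k coordinates
X = rng.standard_normal((n, d))
y = np.argmax(X[:, :k], axis=1)
Y = np.eye(k)[y]                          # one-hot labels, shape (n, k)

# Minimum-norm interpolator of the one-hot labels:
# W = X^T (X X^T)^{-1} Y, computed via the pseudoinverse.
W = np.linalg.pinv(X) @ Y                 # (d, k) score-function weights

# The interpolator fits the training labels exactly (n < d),
# without ever seeing the model parameters (p, q, r, t).
assert np.allclose(X @ W, Y, atol=1e-6)

# At test time: k scores per input, predict the argmax
x_test = rng.standard_normal(d)
pred = int(np.argmax(x_test @ W))
```

This mirrors the point made above: the classifier only sees training data and produces k score functions; no reweighting by the $\lambda_i$ happens at training or test time.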
Summary: By resolving the conjecture posed by Subramanian et al. (2022), the authors address the asymptotic generalization of overparameterized minimum-norm interpolation (MNI) linear multi-class classifiers under two assumptions: (1) features are Gaussian vectors and labels are generated from a $1$-sparse noiseless model; (2) the scaling follows the bi-level ensemble. This paper can be seen as a completion of the previous result of Subramanian et al. (2022), since it captures the generalization for both the regime where the regressor fails and the regime where it works. The technical contribution appears to be a new variant of the Hanson-Wright inequality. Strengths: A sound completion of the previous theoretical result. Weaknesses: The setting of the bi-level ensemble seems a bit weird to me, especially eq (12). Maybe the authors can offer more explanations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Line 165: "it suffices for the maximum value of the LHS ......'' Is it the maximum instead of the minimum here? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and suggestions. > The setting of the bi-level ensemble seems a bit weird to me, especially eq (12). Maybe the authors can offer more explanations. - See the general comment for an overview of the bi-level model. The bi-level model is a set of simplifying assumptions on more general spiked covariance models that distills out a minimal set of assumptions for theoretical tractability while still being able to call out interesting phenomena. - The choice of the feature weights in eq (12) reflects a relatively natural normalization of the covariance of the feature vector so that $\mathrm{tr}(\Sigma) = d = n^p$. For example, this kind of trace would arise if each feature on its own had unit variance but there was some interesting correlation structure among the features. (Of course, we do all our analysis in the underlying rotated eigenbasis coordinates for $\Sigma$ for notational simplicity.) > Line 165: "it suffices for the maximum value of the LHS ......'' Is it the maximum instead of the minimum here? - This is indeed a typo, and we have fixed it in the revised version. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for the clarifications on eq (12); it makes more sense to me now.
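As a rough illustration of the trace normalization $\mathrm{tr}(\Sigma) = d = n^p$ discussed in this rebuttal, here is a sketch of a two-level spectrum; the exponent values and the exact parameterization of the two levels are placeholders and may differ from eq (12) in the paper:

```python
import numpy as np

n = 100
p, q, r = 1.5, 0.6, 0.4            # illustrative bi-level exponents (placeholders)
d = int(n ** p)                     # ambient dimension d = n^p
s = int(n ** r)                     # number of spiked directions s = n^r
a = n ** (-q)                       # fraction of variance in the spike (placeholder form)

# Two-level eigenvalue profile, normalized so that tr(Sigma) = d,
# as would arise if each feature had unit variance but the features
# shared a low-dimensional correlation structure.
lam = np.empty(d)
lam[:s] = a * d / s                 # high level: spiked eigenvalues
lam[s:] = (1 - a) * d / (d - s)     # low level: bulk eigenvalues

assert np.isclose(lam.sum(), d)     # the trace normalization holds
```

The point of the sketch is only the bookkeeping: the few spiked directions carry a vanishing fraction of the total variance, yet the overall trace stays equal to the dimension.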
GraphMP: Graph Neural Network-based Motion Planning with Efficient Graph Search
Accept (poster)
Summary: The paper studies learning-based motion planners with GNNs. To improve on prior GNN-based work, this paper proposes GraphMP, applicable to both low- and high-dimensional planning tasks. The key idea is using (1) predicted heuristic values, (2) an NCC (neural collision checker), and (3) differentiable A* modules (encoded with matrix operations) for efficient motion planning. The experimental results in Sec.4 show that the proposed GraphMP performs better under limited computation time. Strengths: - The proposed concept using multiple components (e.g., learning modules, NCCs, differentiable A*) is clearly explained. Figure 3 and Algorithm 1 are good for following the concept. - The solid experimental results in various motion planning domains. Weaknesses: - I have no concerns except about some assumptions (the input RGG is given, and how to set the upper bound $T_\mathrm{max}$ for unfamiliar environments). Technical Quality: 3 good Clarity: 3 good Questions for Authors: I can follow the concept and descriptions in the submitted version, and feel that the contributions are solid. Here are some comments and questions to clarify the paper. - (Related to the curse of dimensionality?): In the results in Table 1, KUKA7 seems to have a slightly worse success rate than KUKA14. Do you have any explanation for this? I feel that problems with a larger DoF are harder than those with a lower DoF. - Regarding this point, Table 2 and Table 3 are reasonable to me because they have more DoFs. However, only for the success rates in Table 1, the results should be explained in my opinion (maybe they are random effects? If so, it is better to mention them, e.g., random seeds, trials, etc.). - (About the environment $G$): In the setting, the RGG $G$ seems to be given and updated lazily. Can we learn $G$ itself together with the other learnable modules? - In the reported results, LNR seems to be effective in refining paths after A* is finished. 
Since the current LNR seems independent of the learning part, I guess that the performance depends on the given RGG; therefore, I conjecture that learning representations on graphs is an interesting direction (e.g., learning a mask matrix to prune nodes). - (About the learning setting): How can we set the max transition time $T_\mathrm{max}$ in the end-to-end learning procedure? What happens if $T_\mathrm{max}$ is not set adequately? - I feel that setting a larger $T_\mathrm{max}$ is enough to find feasible solutions (i.e., a path that reaches $v_g$), though it may cause some inefficiencies in practice. Do you have any evaluations and discussions? - (Neural heuristic estimators): The comparison is a bit trivial because vanilla A* adopts fundamental heuristic functions. Do you have any discussion of Maze2 (where vanilla A* showed better path costs)? Do you have any findings on learning heuristic values in your domains? - Other minor comments: - I prefer explicitly using an indicator vector $\mathbb{I}[\mathbf{b}_\mathbf{acc} > \mathbf{b}’_\mathbf{acc}]$ instead of $\mathbf{b}_\mathbf{acc} > \mathbf{b}’_\mathbf{acc}$, for clarity and to follow other ML literature. - $\mathbb{1}$ should be given with its dimension (e.g., $\mathbb{1}_{|V|}$) for clarity. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have no explicit concerns about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's very constructive comments and suggestions. The following is our response in order of the questions and comments raised. >Q1: KUKA7 vs KUKA14 in Table 1. Thank you for pointing this out. The reported slightly worse success rate for KUKA7 than for KUKA14 is due to random effects. The current result is based on a single random seed -- GraphMP achieves 99.3\% (rounded to 99\%) and 99.5\% (rounded to 100\%) rates on KUKA7 and KUKA14, respectively. After averaging the results over four different random seeds, the success rates on KUKA7 and KUKA14 become 99.4\% and 99.2\%, respectively, consistent with the trend as DoF increases. We will update the results in the revised version by running with different random seeds. We appreciate your valuable comments. >Q2: Can we learn the RGG? Thank you for the very valuable suggestion. We fully agree that learning the RGG is an interesting research direction. As we discuss in the limitations and future work **(Sec. 2 of Supplementary Material)**, the current RGG generation method is somewhat inefficient, involving unnecessary node and edge generation, which brings unnecessary exploration and affects planning performance. In our future work, we plan to fully leverage the workspace topology and spatial information to bias node sampling/edge construction and/or prune unnecessary nodes. One idea is to directly learn and predict the RGG with both nodes and edges, e.g., using a generative model conditioned on the workspace and start/goal states. We are cautiously optimistic that this learning-based RGG construction strategy may help improve the performance of motion planners with respect to planning speed and path quality. >Q3: Setting of $T_{max}$. Thanks for the question. In our experiments the value of $T_{max}$ is simply set as the graph size (the number of nodes), and this setting is large enough to find feasible solutions. 
To improve efficiency, in practice we also propose an early stopping mechanism, which checks the sum of all elements of the open list vector $\mathbf{o}$ within each iteration. Because the binary vector $\mathbf{o}$ marks all the nodes to be explored as ones, once $\mathbf{o}$ sums to zero there are no more nodes to be explored, and hence the search can be terminated early, avoiding inefficiency. We will include the details of this early stopping mechanism in the revised version. Thank you for pointing it out. >Q4: Learning heuristic values. Thank you for the comments. Because it is less challenging to design a good heuristic function for low-dimensional tasks, A* with an admissible heuristic function, e.g., one based on Euclidean distance, can find a very high-quality path (optimal or near-optimal) in the 2D maze task. As shown in **Fig. R1(a) of Global Response**, in such a scenario the manually designed heuristic values are closer to the actual cost than the ones estimated by the learning-based method. This explains why vanilla A* shows slightly lower path cost than A* equipped with NHE in 2D maze tasks. Notice that this slight cost reduction is not free but comes with more search effort. As shown in **Table 1 in Supplementary Material**, A* with NHE requires a smaller search space than vanilla A* with very similar path cost; therefore, overall NHE still shows very competitive planning performance compared to manual heuristic function design in 2D Maze. For planning in high-dimensional cases, as illustrated in **Fig. R1(b) of Global Response**, the learning-based solution provides better heuristic value estimations than manual design. In addition, we have some findings on learning the heuristic values. First, training with diverse RGGs improves the generalization of NHE. 
Based on the observations that 1) the input RGG to NHE is filtered by a collision checker, implying uncertainty in the node degrees, and 2) the graph size may increase during the planning phase because extra batches of sampled nodes are added if a path cannot be found, we construct diverse RGGs when preparing the training data. More specifically, we randomize both the graph sizes and the $K$ values of the KNN edge construction to ensure that NHE can be trained on different graph structures. Second, it is important to inform all nodes of the goal information. Considering that the heuristic value estimates the actual cost to the goal, we initialize the features of each node by concatenating its own features, the goal features, and the distance-to-goal features. This operation is crucial for learning and predicting the heuristic values properly. Third, the number of message passing loops should be properly selected. The number of message passing iterations decides how many hops of neighboring information are seen from each node. It is important for nodes to receive information about the whole graph so that high-quality heuristic values can be estimated based on global information. Too few loops are insufficient to receive the global information, while too many loops are likely to cause the over-smoothing issue. Therefore, a proper number of message passing loops should be determined by the graph size and node degrees. In our work, considering that the maximum graph size is 1000 and the $K$ value of KNN is 10, we empirically set the number of message passing loops to 3. >Q5: Format of indicator and all-one vectors. Thank you for pointing this out. We will correct these mathematical formats in the updated version. --- Rebuttal Comment 1.1: Title: Thank you for your updates. Comment: I appreciated the contributions of the authors through the discussion and rebuttal phases. 
The responses above clarified some concerns and my questions in the reviewing process, and I am convinced that the submitted contribution is technically solid and interesting. As you see, I initially rated the paper as 7 (Accept), and so I hope this paper is included in the conference proceeding. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thanks again for the valuable feedback and recognition of our work! We will be sure to update the manuscript according to your suggestions.
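The early stopping mechanism described in the Q3 response above (terminating once the open-list vector $\mathbf{o}$ sums to zero) can be sketched as follows; the toy node-selection and update logic is purely illustrative and is not the paper's differentiable matrix-form A*:

```python
import numpy as np

def search_with_early_stop(adj, start, goal, t_max):
    """Toy graph search with the open-list early-stopping check.

    adj:   (n, n) 0/1 adjacency matrix
    o:     binary open-list vector marking candidates still to expand
    Stops as soon as o sums to zero, before exhausting t_max iterations.
    """
    n = adj.shape[0]
    o = np.zeros(n, dtype=int)           # open-list vector
    closed = np.zeros(n, dtype=int)      # already-expanded nodes
    o[start] = 1
    for _ in range(t_max):
        if o.sum() == 0:                 # early stopping: nothing left to explore
            return False
        v = int(np.argmax(o))            # toy selection (no cost/heuristic here)
        o[v] = 0
        closed[v] = 1
        if v == goal:
            return True
        nbrs = np.where(adj[v] > 0)[0]   # open all unexpanded neighbors
        o[nbrs] = np.maximum(o[nbrs], 1 - closed[nbrs])
    return False
```

On a graph whose goal is unreachable, the loop exits via the `o.sum() == 0` check well before `t_max` iterations, which is exactly the inefficiency the rebuttal's mechanism avoids.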
Summary: This paper presents an improved graph searching technique, called GraphMP. The algorithm consists of two modules, the neural collision checker and the neural heuristic estimator, which are utilised by a differentiable graph-based A* module for path planning. Strengths: 1. The algorithm is presented in detail, supplemented by toy-case studies shown in Fig.1 and Fig.2. They are illustrative. 2. The structure of Section 3 is good. Each module is explained. 3. The proposed algorithm is compared to multiple existing algorithms, including non-learning-based algorithms. Weaknesses: 1. I think Section 3.1 needs to be further polished. Currently, it only states the problem to be solved. Necessary background on how prior works handled the task, and on which parts of the existing framework are to be improved, is missing. (I can see some of this information in the supplementary file, but I suggest putting it in the main text.) 2. Table 1 is not illustrative. Algorithms such as BIT* and RRT* have probabilistic completeness, hence their success rate is high if the computation time is sufficiently long. Currently we cannot appreciate the goodness of the proposed algorithm regarding completeness. I suggest the authors present the time limit set for the experiments in Table 1 (I didn't find it in Section 4.2). Then, the authors should reduce the time limit to 75% of that value and show the algorithms' performance. Some of the existing algorithms will not have such good results as shown in Table 1 now. 3. Just a minor comment: It looks like in Section 3.1 the problem is stated in a form similar to RRT*, but the algorithm is claimed as a search-based algorithm. This is a little bit strange, because if the RGG is pre-constructed before running the path searching algorithm, then the presentation can safely start from "given a graph, ..". 4. I didn't understand the "lazy node removal" module. Why is it required? 
The performance of the algorithm without LNR is not explained well in Section 4.3. 5. The meaning of the terms "w/" and "w/o" are not intuitive. I suggest change them. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I hope the authors can answer my comments above, and tell me how they will improve the manuscript. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I don't think the paper has a potential negative societal impact. I hope the authors can summarise the limitation and briefly talk about the future of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's very constructive comments and suggestions. The following is our response in order of the questions and comments raised. >Q1. Polishing Section 3.1. Thanks for the valuable comment. Given an RGG $G=(V,E)$, source node $v_s$ and goal node $v_g$, the prior work GNN-Explorer initializes an exploration tree $T$ rooted at $v_s$, and visits edges $e_{free} \in E$ sorted by their predicted priorities. Each visit performs the accurate collision check on the edge and tries to append it to $T$, until $v_g$ is reached. Here the search of GNN-Explorer relies on the predicted edge priorities without fully exploring the graph structure or formally considering the impact of the accumulated path cost during path exploration, thereby hampering path quality. On the other hand, graph-based A* computes the path from the source node $v_s$ by iteratively selecting the best candidate $v_{sel} = argmin_{v \in O} (g(v) + f(v))$ from the open list $O$. Here, $g(v)$ accumulates the actual cost from $v_s$ to $v$, and $f(v)$ estimates the heuristic value of the cost from $v$ to the goal $v_g$. Once $v_{sel}$ is selected, each of its reachable neighboring nodes $v_{nbr} \in \mathcal{N}(v_{sel})$ is checked and updated. In this way, A* can better explore the graph structure and perform cost-aware search, but it requires manual design of a high-quality heuristic function $f(v)$, which is a challenging task for high-dimensional problems. Also, visiting the neighboring nodes can be time-consuming because the accurate collision check must be performed on each edge $e_{v_{sel}, v_{nbr}}$. To overcome these limitations of prior works, GraphMP proposes to use a GNN to extract and learn the important patterns of the RGG, and then identifies the near-optimal path using a learnable graph search component. 
More specifically, a neural collision checker and a neural heuristic estimator are proposed to extract key graph information from the input RGG and provide it to the proposed reformulated differentiable A* module for end-to-end training. Therefore, path planning becomes a graph structure-aware and cost-aware process with powerful graph pattern extraction capability and a learnable heuristic function, enabling it to achieve high planning performance and making it suitable for both low- and high-dimensional tasks. We will update Section 3.1 to include this discussion and analysis. >Q2: The time limit of planning. Thank you for the valuable suggestion. Following the same setting used in GNN-Explorer, in our experiments the time budget is given as the maximum number of sampled nodes ($1000$) for all planners. Following your suggestion, we report the success rates of all planners under wall-clock time limits (400 ms and 300 ms) in the table below. It is seen that under the same wall-clock time budget, GraphMP achieves the highest success rates in almost all tasks. **Table R2. The success rates when the time limit is 400 ms and 300 ms, respectively. Results for 300 ms are shown in parentheses.** | | Maze2 | UR5 | Snake7 | KUKA7 | KUKA13 | KUKA14 | |--------------|-------------|-------------|-------------|-------------|-------------|-------------| | GraphMP | 98.5 (97.1) | 91.1 (88.4) | 99.3 (99.3) | 98.9 (98.6) | 96.1 (95.8) | 97.5 (97.2) | | GNN-Explorer | 98.2 (96.6) | 86.3 (81.7) | 99.2 (99.1) | 98.3 (98.2) | 96.4 (95.9) | 97.2 (96.9) | | RRT* | 61.7 (53.3) | 27.8 (14.8) | 47.5 (28.0) | 80.0 (79.6) | 57.2 (49.9) | 65.3 (58.1) | | BIT* | 90.9 (83.4) | 80.9 (76.1) | 98.9 (98.5) | 97.8 (97.1) | 96.1 (94.3) | 95.1 (94.6) | | LazySP | 92.7 (91.7) | 83.3 (78.5) | 99.3 (99.3) | 98.4 (97.6) | 97.6 (97.1) | 97.4 (96.5) | >Q3: Starting the problem statement in Section 3.1. Thanks for pointing this out. We will follow your suggestion to revise this part. 
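The graph-based A* selection rule $v_{sel} = argmin_{v \in O} (g(v) + f(v))$ described in the Q1 response can be sketched as follows; the heuristic is a pluggable callable (a learned estimator such as the NHE could be dropped in), and this plain priority-queue interface is an assumption for illustration, not the paper's matrix-form differentiable implementation:

```python
import heapq

def a_star(nbrs, cost, h, v_s, v_g):
    """Graph A*: expand the open-list node minimizing g(v) + h(v).

    nbrs: callable v -> iterable of neighbors (an edge check, e.g. an
          NCC-style collision filter, would sit here)
    cost: callable (v, u) -> edge cost
    h:    callable v -> heuristic estimate of cost-to-goal
    Returns the path v_s..v_g as a list, or None if unreachable.
    """
    g = {v_s: 0.0}
    parent = {v_s: None}
    open_heap = [(h(v_s), v_s)]
    closed = set()
    while open_heap:
        _, v = heapq.heappop(open_heap)
        if v in closed:
            continue
        closed.add(v)
        if v == v_g:                      # reconstruct path via parents
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for u in nbrs(v):
            new_g = g[v] + cost(v, u)
            if new_g < g.get(u, float("inf")):
                g[u] = new_g
                parent[u] = v
                heapq.heappush(open_heap, (new_g + h(u), u))
    return None
```

With `h = lambda v: 0.0` this reduces to Dijkstra; swapping in a learned heuristic only changes the expansion order, which is what lets a good NHE shrink the search space.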
>Q4: Lazy node removal. Thank you for the question. The lazy node removal (LNR) aims to reduce path cost by eliminating redundant nodes along the computed path. Specifically, LNR repeatedly attempts to connect pairs of disconnected nodes from both ends of the path via a collision-free edge. All the intermediate waypoints between the connectable nodes are removed. **Fig. 4(a)** shows an example. LNR detects that nodes 0 and 2 can be connected directly, so node 3 is pruned from the path. Our experiments show that LNR can reduce the average path cost by up to 21.92\% on various types of planning tasks. As shown in Table 4, GraphMP without LNR achieves faster planning speed with a slight path cost overhead. So whether to use LNR is a design choice depending on the specific performance requirement of the application scenario (path cost-sensitive or time cost-sensitive). >Q5: "w/" and "w/o". Thanks for pointing this out. We will use the full forms in the revised version. >Q6: Limitations and future work. Thank you for the comments. We discuss the limitations and future work in **Sec. 2 of Supplementary Material**. GraphMP may suffer from the low quality of the raw RGGs due to the uniformly sampled nodes and KNN edges. Such an RGG construction strategy neglects the workspace topology and the specific task information, which may waste resources exploring space that cannot yield optimal paths. Therefore, in our future work, we plan to investigate more efficient graph construction methods, e.g., a graph generative model that generates nodes and edges simultaneously, to construct high-quality RGGs with fewer, only necessary, nodes and edges for finding the optimal paths. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I can see most of my questions have been solved. 1. The remaining problem is still about the significance of the algorithm. 
It should be noticed that without the add-on modules (e.g., LNR), the path distance optimality of the proposed algorithm (Table 4, line 1) is not good compared to that of other algorithms (presented in Table 2). This means that the claimed contribution "achieves significant improvement on path quality and ..." in the abstract is essentially given by the add-ons, not the graph-based structure itself. In other words, if I add a smoother to BIT*, the resultant path length of BIT* will also improve. 2. The other issue I want to mention after reading other reviewers' comments is that there should be a formal discussion of the completeness (is the proposed algorithm complete?) and the optimality (here this means the optimality without add-ons, because the authors should know that in a simply-connected environment ALL path planning algorithms, after being equipped with a shortcut optimiser, are optimal). P.S. Don't get me wrong, I don't mean the algorithm must be both complete and optimal. I don't want to distress the authors. --- Reply to Comment 1.1.1: Title: Rebuttal by Authors Comment: >Q7: Planning performance with and without LNR. Thank you for the valuable comments. We believe we cannot directly conclude that GraphMP without LNR has inferior path quality to other algorithms such as BIT* by comparing the results in Table 2 and Table 4. This is because the results in Table 2 and Table 4 are based on the setting of the same "maximum number of sampled nodes (1000)" for all planners (we follow this setting used in the GNN-Explorer paper (NeurIPS'21)), instead of "the same planning wall-clock runtime budget". 
Consider 1) GraphMP has higher planning speed than others (shown in Table 3) and 2) planning time can impact the path quality, e.g., path cost of BIT* can be reduced with more planning time; therefore, when evaluating GraphMP and BIT* under the same runtime budget instead of same sampled nodes budget, GraphMP, with and without LNR, will show better path quality performance in this setting. As shown in the following Table R3 and Table R4, with 200ms planning time budget, GraphMP without using LNR provides lower path cost than LNR-free BIT* in most tasks, meanwhile achieving much higher success rates. We also evaluate the path costs when both GraphMP and BIT* are equipped with LNR modules. As shown in the following Table R5, using LNR improves the path quality of both planners, and GraphMP shows better path quality performance than BIT* in most tasks when the same 200ms planning time limit is set. **Table R3. The mean success rates (\%) when the time limit is 200ms.** | | Maze2 | UR5 | Snake7 | KUKA7 | KUKA13 | KUKA14 | |:-------------------:|:-----:|:----:|:------:|:-----:|:------:|:------:| | GraphMP without LNR | 95.2 | 84.7 | 99.1 | 98.5 | 95.4 | 97.1 | | BIT* without LNR | 73.9 | 71.7 | 96.5 | 96.4 | 90.3 | 91.9 | **Table R4. The mean path cost when the time limit is 200ms.** | | Maze2 | UR5 | Snake7 | KUKA7 | KUKA13 | KUKA14 | |:-------------------:|:-----:|:----:|:------:|:-----:|:------:|:------:| | GraphMP without LNR | 2.37 | 6.85 | 4.97 | 6.71 | 10.26 | 9.93 | | BIT* without LNR | 2.13 | 7.69 | 5.41 | 6.94 | 10.53 | 10.64 | **Table R5. The mean path cost when applying LNR to both GraphMP and BIT\*. The time limit is 200ms.** | | Maze2 | UR5 | Snake7 | KUKA7 | KUKA13 | KUKA14 | |:----------------:|:-----:|:----:|:------:|:-----:|:------:|:------:| | GraphMP with LNR | 2.25 | 6.7 | 4.81 | 6.35 | 10.04 | 9.71 | | BIT* with LNR | 2.12 | 7.35 | 5.26 | 6.64 | 10.3 | 10.35 |
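The shortcutting idea behind LNR discussed in Q4 and Q7 can be sketched as follows; `edge_free` stands in for the accurate collision checker, and the greedy far-end scan below is one possible realization of "connecting node pairs from both ends," not necessarily the paper's exact procedure:

```python
def lazy_node_removal(path, edge_free):
    """Shortcut a waypoint path: from each kept node, jump to the furthest
    later node reachable via a collision-free edge, dropping the
    intermediate waypoints. edge_free(a, b) -> bool is assumed to be the
    accurate edge collision check."""
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        # scan from the far end toward i for the furthest direct connection
        j = len(path) - 1
        while j > i + 1 and not edge_free(path[i], path[j]):
            j -= 1
        out.append(path[j])   # j == i + 1 always connects (existing path edge)
        i = j
    return out
```

Because the pruned path only replaces detours with direct collision-free edges, its cost can only decrease, which matches the rebuttal's observation that LNR trades extra collision-check time for lower path cost.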
Summary: The paper extends the previous GNN-Explorer work and replaces its greedy search strategy with A* search using a neural heuristic. Furthermore, it utilizes the neural collision checker and lazy node removal to improve the success rate and path cost. The approach is evaluated on benchmarks ranging from a 2D maze to 14D dual arms and shows that it outperforms the baselines. Strengths: 1. The approach is clear and easy to understand. 2. The extensive results on 6 environments seem thorough and show the soundness of the approach. Especially for UR5, the planning time is fast. 3. The ablation studies show the effectiveness of each key component. Weaknesses: 1. The novelty is incremental. All the key components can be found in previous works, including graph-based search (from GNN-Explorer), neural heuristic and differentiable training (from Neural A*), neural collision checker (from Fastron and ClearanceNet), and lazy node removal (from Motion Planning Networks). However, combining them and demonstrating their validity is also important; therefore, I recommend borderline acceptance. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. I wonder how sensitive the algorithm is to the hyperparameter θ. As claimed in the paper, this parameter balances the trade-off between planning efficiency and path safety. So could you please give an ablation study on how robust the algorithm is to the choice of θ? 2. Do you mind providing the TP, FP, TN, FN scores for Figure 5 (Left)? I would love to know whether this neural collision checker is overestimating or underestimating the collision risk. 3. During training, does the NHE take the graph input that is predicted and filtered by the NCC, or does it take a completely collision-free RGG? If it is the latter, then there might be some distribution shift happening during inference. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please discuss the limitations, since I do not see the texts mentioning the limitations. An obvious limitation is that the algorithm is not complete. For example, it is possible that the neural network predicts an edge to be collision-free with very high confidence (say 90%), and the algorithm could find a path that is actually in collision because of it. I do not see any noticeable negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's very constructive comments and suggestions. The following is our response in order of the questions and comments raised. >Q1: Technical novelty. Thanks for the valuable comments. As we state in the Related Work section, neural network-enabled motion planning has been an active research field in recent years, and different components of planners have been studied from a learning-based perspective in the literature. Compared with the existing efforts, GraphMP has several technical contributions. First, as analyzed in the Introduction section, GNN-Explorer essentially uses a GNN to identify edge priority to grow the exploration tree. Due to its tree exploration-based planning strategy, the mechanism of GNN-Explorer mainly focuses on finding edge priority (a type of "hard" information). Instead, GraphMP does not use tree-based exploration, and the GNN in GraphMP serves to learn the graph information to estimate heuristic values. Therefore, GraphMP fully explores the graph structure and properly considers the impact of the critical accumulated cost (a type of "soft" information) in the planning phase, providing better planning performance (success rate, path cost and planning speed) than GNN-Explorer. Second, as mentioned in Section 3.5, Neural A* is a work focusing on the 2D planning task. More specifically, it estimates the cost from the source to all nodes instead of learning heuristic values for each node. Also, it uses convolutional layers to process the input image information, limiting it to grid-based 2D planning tasks. On the other hand, the GNN-based model structure and the differentiable training of GraphMP are designed for graph input and can be applied to planning tasks of any dimension. Third, both Fastron and ClearanceNet are designed for predicting whether a single node (robot state/configuration) is in the obstacle space or not. 
To check the collision status of an edge (robot movement), these two methods need to discretize the edge into a set of points and then examine each point. Instead, GraphMP uses the neural collision checker (NCC) to learn the collision status of edges, so it can directly check the clearance of all edges in the graph without discretizing edges into points. Also, we agree that removing redundant nodes/states to improve path quality is a common post-processing practice in the motion planning literature, and the shortcutting methods in our work and MPNet can be viewed as simplified versions of path smoothing with the shortcutting heuristic. This operation makes a trade-off between path quality and planning time and can optionally be used if required. We will add the related path smoothing literature and discussion in the updated version. >Q2: Sensitivity of $\theta$. Thanks for pointing this out. An ablation study on the impacts of $\theta$ on planning time and success rate is reported in **Fig. 2 of Supplementary Material**. It shows that a larger $\theta$ brings higher time cost due to the increasing demand for accurate collision checks. On the other hand, the success rate first increases with larger $\theta$, since a smaller $\theta$ causes more aggressive approximate collision checking, potentially yielding solutions that contain collisions. Since the success rate becomes steady across different environments as $\theta$ approaches 80%, we adopt $\theta$=80% in our experiments to strike a good balance between planning speed and success rate. These figures are also reported in **Fig. R3 of Global Response**. >Q3: TP/FP/TN/FN scores of NCC. Thanks for the suggestions. Table R1 below reports the TPR/FPR/TNR/FNR of NCC for Fig. 5. It is seen that NCC has sufficiently large TPR and TNR and small FPR and FNR, demonstrating good prediction capability. 
Based on this well-performing NCC, the success rate of GraphMP is 99%-100% across different tasks (reported in **Table 1 of main paper**).

**Table R1. TPR, TNR, FPR, FNR of NCC.**

|     | Maze2 | UR5   | Snake7 | KUKA7 | KUKA13 | KUKA14 |
|-----|-------|-------|--------|-------|--------|--------|
| TPR | 98.84 | 93.15 | 96.41  | 94.29 | 93.73  | 91.86  |
| TNR | 98.93 | 93.17 | 96.35  | 94.37 | 93.77  | 91.92  |
| FPR | 1.07  | 6.83  | 3.65   | 5.63  | 6.23   | 8.14   |
| FNR | 1.16  | 6.85  | 3.59   | 5.71  | 6.27   | 8.08   |

>Q4: Input graph for NHE training. Thank you for this valuable question. In the training process, the input to NHE is a completely collision-free RGG instead of the predicted RGG from NCC. We make this design choice because the RGGs predicted by NCC can potentially contain collided edges, and hence using such noisy training data may affect the performance of NHE. Instead, training NHE on completely collision-free RGGs helps NHE learn a more suitable heuristic function. As shown in the experimental results, GraphMP achieves strong planning performance despite this input difference for NHE between the inference and training phases. >Q5: Potential limitation, e.g., final found path may not be collision-free? Thank you for the comments. In **Sec. 2 of Supplementary Material**, we discuss the limitations of our approach and potential research directions. More specifically, the construction of the raw input RGG may be inefficient because of the uniform node sampling and KNN-based edge connection. Without fully leveraging specific prior spatial information, this straightforward RGG-building method involves many unnecessary node samplings and edge constructions, hurting construction efficiency. Regarding the legality of the final found path, GraphMP guarantees that the final path, if found, is collision-free. As described in **Algorithm 4 of Supplementary Material**, an accurate collision check is performed on the final computed path. 
Based on this mechanism and the good prediction performance of NCC, the success rate of GraphMP for finding collision-free paths reaches 99%-100% across different tasks. --- Rebuttal Comment 1.1: Title: Thanks for partially addressing the issues Comment: My main concern is Q5; see the corresponding paragraph below. Thanks for explaining the technical differences in Q1. Now I understand better how you integrate and modify the previous works into the whole framework. The result from Q2 is reasonable and expected, and thanks for conducting the experiment. One thing you can do in the revision is to add the ratio of the number of edges above theta to the total number of edges, with regard to theta. No need to do it now in the rebuttal, since this is not my main concern. Thanks for providing the Q3 result, which shows the NCC is a balanced checker. This is good and addresses my concern. Your answer to Q4 is a little tricky, since I’m asking whether making it an in-distribution graph will make the current result better (the result from the current setting is certainly already good). This question mainly serves as a suggestion for improvement; it would not affect my criteria for acceptance. Q5 is still a severe problem. I still strongly suggest incorporating the discussion of incompleteness in the limitation section. The reason that NeurIPS provides this section is to describe the drawbacks of your own approach more thoroughly and objectively. There is simply no way to achieve completeness if the NCC is used, and all you need to do is state it in the limitation section. I would not suggest rejection if you acknowledge such a drawback. However, it would be a severe problem if you realize such a problem (as in the discussion with Reviewer Dhw9) and choose to ignore it in the limitation section. --- Reply to Comment 1.1.1: Title: Thank you for your feedback and suggestions Comment: Thank you very much for reading our responses and providing very constructive feedback. 
We are happy that we have addressed some of your concerns. Below we respond to the remaining ones. >For Q2: Thank you for the valuable suggestion! We will follow your comments and add the figure showing the ratio of edges with respect to different intervals of prediction confidence in the revised manuscript. >For Q4: Thank you very much for the valuable suggestion, and for pointing out the option of training NHE using the output of NCC. As we described in the previous response, the reason we use a collision-free RGG is to avoid the impact of training data containing collided edges. On the other hand, as you suggest, using the predicted RGG in both training and inference can potentially further improve the performance because of the benefit of being in-distribution. This is a very good suggestion that reminds us to explore different data preparation strategies, and we will follow it. Due to the limited time in the author-reviewer discussion phase, we are not able to provide updated results of training all models using the RGG generated by NCC at the current moment. We will keep working on this and add the experimental results using the predicted RGG for training in the updated manuscript. >For Q5: Thank you for the very constructive comments! We completely agree that the limitations of a work should be described more thoroughly and objectively. In our previous response, we explained that GraphMP will not generate a final path containing collided edges because it performs a legality check before generating the final output; in other words, "GraphMP will not produce collision-included paths". Now we understand that you are referring to the completeness problem; in other words, "GraphMP cannot theoretically guarantee to find collision-free paths asymptotically." We really appreciate you and Reviewer Dhw9 for pointing out this limitation, as well as the optimality problem, which we did not realize in the original submission. 
Following the valuable suggestions from you and Reviewer Dhw9, we will revise and expand the existing Limitation Section of the paper to incorporate the discussion of incompleteness and the lack of optimality guarantees. A draft version of the updated Limitation Section is prepared as follows. **Limitation of This Work** Despite its good empirical performance across different tasks, GraphMP still has some limitations, as follows. First, it does not provide probabilistic completeness when $\theta$<100%. That is, if the collision status of some edges is determined by the neural collision checker (NCC), then even if the prediction accuracy of NCC is high and a collision-free path exists in the input RGG, GraphMP still cannot guarantee to find the feasible solution asymptotically. Notice that although probabilistic completeness can be achieved by setting $\theta$=100% (a proof will be included in another section of the supplementary material), the planning time will accordingly increase due to the extra costs incurred by performing accurate collision checks on all the explored edges. Second, it does not offer asymptotic optimality. GraphMP performs the graph search on an implicit RGG which is incrementally expanded with more batches of nodes. Once a path is found, GraphMP validates its legality and returns the solution. Because 1) this mechanism naturally means that the quality of the sampled RGGs has a heavy impact on the path cost -- the waypoints along the paths are restricted to be a subset of the existing nodes of the RGG, but the RGG itself cannot be guaranteed to contain the optimal path; and 2) the search process terminates once a path is found, without further seeking better solutions, GraphMP cannot theoretically guarantee to find the optimal path asymptotically. Third, its efficiency is still limited by inefficient RGG construction. 
More specifically, 1) the uniform sampling of nodes disregards the environmental topology, causing some unnecessary node exploration; and 2) the construction of raw edges also generates unnecessary edges, thereby limiting the further runtime speedup provided by the proposed approach. Again, we thank Reviewer KcNw very much for the very constructive suggestions and comments.
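The mechanism described in this thread (approximate in-search collision checks gated by the confidence threshold $\theta$, plus an exact check on the final path) can be sketched as follows. This is our own illustrative simplification, not the authors' implementation; the function names and the exact gating rule are assumptions:

```python
def edge_is_free(edge, ncc_prob_free, exact_check, theta=0.8):
    """Lazy collision check: trust the neural collision checker (NCC)
    only when its predicted probability of being collision-free reaches
    the threshold `theta`; otherwise fall back to the accurate checker.
    When theta < 1 the NCC may be confidently wrong, which is exactly
    the incompleteness discussed in the limitation draft above.

    Note: the gating rule here is a hypothetical simplification.
    """
    if ncc_prob_free(edge) >= theta:
        return True                      # approximate: may be wrong
    return exact_check(edge)             # accurate but slower


def verify_final_path(path, exact_check):
    """Accurate collision check on every edge of the final computed
    path, so any returned path is guaranteed collision-free even
    though the planner itself is not complete."""
    return all(exact_check(edge) for edge in zip(path, path[1:]))
```

Raising `theta` shifts work from the approximate branch to the exact checker, which matches the reported trade-off between planning time and success rate.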
Summary: This paper presents GraphMP, a neural motion planner that uses GNNs and graph search techniques to perform motion planning in various scenarios. GraphMP has two components: a neural collision checker that estimates the collision status of edges in a randomly sampled graph, and a neural heuristic estimator that assigns heuristic values to nodes for directing the graph search. This paper also develops a graph-based A* algorithm that allows end-to-end learning of the neural heuristic estimator. This paper evaluates GraphMP on six planning tasks with dimensions from 2D to 14D and demonstrates that it performs better than several classical and learning-based planners in terms of path quality, planning speed and success rate. Strengths: - This paper provides extensive experiments and comparisons with baselines on various environments and metrics. It shows strong experimental results in many planning settings. - The paper leverages the advantages of both GNNs and graph search algorithms to improve performance over existing methods like GNN-Explorer. - Effective techniques like in-search differentiable graph-based A*, collision check and lazy node removal are added to improve the performance of GraphMP. Weaknesses: - This paper does not study or show how different factors and design choices affect the performance of the proposed GraphMP algorithm, including - the number of neighbors (K) in K-NN (set to 10 and 20 in different experiments) - the graph size - the in-search collision check threshold - This paper does not mention the challenges or drawbacks of GraphMP, such as how it deals with dynamic or uncertain settings, or how it adapts to more complicated tasks. - It would be helpful to provide more information about how the neural heuristic estimator and the neural collision checker are trained. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For A* to work provably, how to ensure that the neural heuristic estimator produces admissible or consistent heuristic values? - How to handle non-reachable goal states? - Can a discussion about different design choices in the algorithm be included? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please include a limitation section Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's very constructive comments and suggestions. The following is our response in the order of the questions and comments raised. >Q1: Impact of K, graph size and $\theta$. Thank you for pointing this out. **Fig. 2-4 in Sec. 6 (Ablation Study) of Supplementary Material** report the impacts of these design choices on performance. A detailed analysis is also discussed in that section. More specifically, for the **varying threshold $\theta$** of the in-search collision check, evaluation results show that the time cost keeps increasing with a greater $\theta$, because more accurate collision checks are performed during the graph search; the success rate increases first and then becomes steady as $\theta$ approaches $80\%$. Therefore, we set $\theta$ to $80\%$ to achieve a good balance between planning speed and success rate. For the **varying $K$ value of KNN**, we report the mean path cost, time cost, and success rate with a fixed node-sampling batch size of $100$. It is seen that the path cost and time cost first decrease and then become stable as $K$ increases. This is because denser graph edges are more likely to compose a shorter path and more connections can improve the exploration efficiency. Meanwhile, the success rate first increases and then becomes stable as $K$ approaches 10. So in our experiments $K$ is set to 10. Note that $K=20$ in Fig. 5 of the main paper is only used to show the good performance of NCC; we have updated this figure in **Fig. R2 of Global Response**. For the **varying graph size**, we evaluate the performance of GraphMP with respect to different numbers of nodes per sampling batch, which corresponds to the incremental graph size. Here the maximum budget of sampled nodes (maximum graph size) is set to $1000$. It is seen that the time cost is lowest when the number of nodes equals $100$ and then grows with more nodes. 
This is because too few nodes incur extra sampling batches, while too many nodes are more than necessary to compute a valid solution. The success rate and path cost achieve the best results when the sampling size is $100$. These figures are also reported in **Global Response (Fig. R3-R5)**. >Q2: Potential limitation, e.g., cannot work in dynamic settings? Thanks for the valuable comments. The limitations of our approach are analyzed in **Sec. 2 of Supplementary Material**. GraphMP may suffer from inefficient construction (uniform node sampling and KNN-based edge connection) of the raw input RGG. Without leveraging the workspace topology and the spatial relationship between the start/goal states, the raw RGG requires more nodes and edges than necessary to cover the configuration space. Our future work will explore how to directly learn biased node sampling and construct only the necessary edges. We believe it is feasible to extend GraphMP to dynamically changing environments. Since obstacle movements are typically predictable, instead of predicting a single-value collision probability for each edge, NCC can learn to predict a vector of probabilities where each entry represents the probability at the corresponding time window. The A* search then looks up the edge probabilities at each timestamp, and plans over an updated collision-free RGG without heavy collision checks, making GraphMP adaptable to dynamic planning. >Q3: More information on training NHE and NCC. Thanks for the suggestion. The training data consists of 2000 different workspaces for both NCC and NHE. For each workspace, we randomly construct 20 RGGs by sampling a random number of nodes ([100, 200, 300, 400]) and a random $K$ value for KNN ([5, 10, 15, 20]). The Adam optimizer is used for training both NCC and NHE, and the learning rate, number of training epochs and batch size are set to $10^{-3}$, $400$ and $8$, respectively. 
**For NCC training**, given the input data as the raw RGG and the ground-truth output as the collision status of all edges, NCC predicts the probabilities of being collision-free for all edges in parallel, and the binary cross-entropy loss is adopted. **For NHE training**, as shown in **Algorithm 1**, given a collision-free RGG $G = (V, E_{free})$ and start/goal nodes ($v_s$, $v_g$), the ground-truth output is a length-$|V|$ binary vector ($\hat{\mathbf{c}}$) denoting the optimal path computed by Dijkstra. NHE predicts a length-$|V|$ vector $\mathbf{b_{heu}}$ where each entry denotes the heuristic value of the corresponding node. Differentiable A* performs the A* search to calculate $\mathbf{c}$, a length-$|V|$ binary vector that is repeatedly updated by marking the explored nodes as ones. The difference (L1 loss) between $\mathbf{c}$ and $\hat{\mathbf{c}}$ is used as the training loss. >Q4: Admissibility/consistency of NHE. Thank you for the valuable comments. Empirical evaluations show that NHE exhibits admissibility and consistency. Let $f(v_i)$ denote the heuristic value of node $v_i$ and $c(v_i, v_j)$ the actual cost between $v_i$ and $v_j$. As shown in the left figures (heatmaps) of **Fig. R1(a)(b) of Global Response**, $|f(v_i) - f(v_j)| - c(v_i, v_j)$ is non-positive for most edges $e_{ij} \in E$, indicating good consistency of NHE. The right figures in Fig. R1(a)(b) also show that $f(v_i)$ estimated by NHE is consistently smaller than the actual cost $c(v_i, v_g)$ ($v_g$ is the goal node), indicating good admissibility of NHE. >Q5: How to handle non-reachable goal states? Thanks for pointing this out. GraphMP uses a batched sampling strategy for the case when the goal is not reached. As shown in **Algorithm 4 of Supplementary Material**, GraphMP first constructs a raw RGG using a single batch of nodes and searches for a collision-free path. 
If the path is not found, i.e., the goal state is not reached, another batch of nodes is sampled and appended to the graph. The above procedure is repeated until a collision-free path is found or the maximum budget of samples is reached. An accurate collision check is performed on the final computed path to verify its legality. --- Rebuttal Comment 1.1: Title: thanks for the rebuttal Comment: I would like to thank the authors for the rebuttal and the additional experiments! Nice work! I have updated my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thanks a lot for the constructive comments and positive feedback!
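The admissibility and consistency criteria used in the answer to Q4 above can be checked numerically on any explicit graph. The sketch below (our own illustrative code with hypothetical names, not the authors' evaluation) uses Dijkstra for the true costs-to-goal and tests the two conditions from the rebuttal: $|f(v_i) - f(v_j)| \le c(v_i, v_j)$ on every edge, and $f(v_i) \le c(v_i, v_g)$ for every node:

```python
import heapq

def dijkstra(adj, goal):
    """Exact cost from every node to `goal` on an undirected weighted
    graph given as {node: [(neighbor, cost), ...]}."""
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in adj[u]:
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def is_admissible(h, adj, goal, tol=1e-9):
    """h(v) must never overestimate the true cost to the goal."""
    dist = dijkstra(adj, goal)
    return all(h[v] <= dist.get(v, float("inf")) + tol for v in adj)

def is_consistent(h, adj, tol=1e-9):
    """|h(u) - h(v)| <= c(u, v) for every edge, as in the rebuttal."""
    return all(abs(h[u] - h[v]) <= c + tol
               for u in adj for v, c in adj[u])
```

An admissible, consistent heuristic guarantees A* optimality on the sampled graph; the rebuttal's Fig. R1 verifies these inequalities empirically for the learned NHE rather than by construction.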
Rebuttal 1: Rebuttal: **We would like to thank all reviewers for the valuable comments and suggestions.** In the attached PDF file, we include five figures, described as follows: the study on the admissibility/consistency of the neural heuristic estimator **(Fig. R1)**, and the updated neural collision checker results on the RGG when $K = 10$ **(Fig. R2)**. Besides, for the reviewers' convenience, we also include the ablation studies on varying $\theta$ **(Fig. R3)**, varying sampling size **(Fig. R4)** and varying $K$ **(Fig. R5)**. Pdf: /pdf/717b14b09b64970b4f24613c7ea1fe113885698a.pdf
NeurIPS_2023_submissions_huggingface
2023
The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model
Accept (poster)
Summary: This paper studies the statistical nature of distributionally robust reinforcement learning under the generative model. Specifically, it studies two divergences, total variation and chi-square, over the full range of uncertainty levels. The paper improves the upper and lower bounds, especially when the uncertainty levels are small. Based on these results, the paper partially answers the question of whether distributionally robust RL is harder to learn than its non-robust counterpart. Strengths: The paper's theoretical results are both interesting and important in understanding the statistical limits of distributionally robust RL. The paper gives a complete story for both the TV divergence and the chi-square divergence. Weaknesses: A brief explanation of the reasons for the improvements in both the upper and lower bounds, compared to the previous techniques, would significantly help readers better understand the technical contributions of this paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can the authors briefly explain the current challenges in giving parallel results under the KL divergence? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our thanks to the reviewer for their meticulous review and perceptive insights regarding future directions. It is gratifying to know that the reviewer found our results both interesting and important. In what follows, we provide our response to the reviewer's comments. ### 1. Adding discussion on the technical contributions for improving both the upper and lower bounds. Thanks for this valuable suggestion. We shall highlight the following discussion in the introduction. * **Technical contribution to improving the upper bound.** Achieving the tighter upper bounds for the TV and $\chi^2$ uncertainty sets requires different technical tools. For the TV uncertainty set, the key part that pins the sample complexity below that of standard RL is a carefully derived tighter bound on the range of the robust value function (i.e., Lemma 7). For the $\chi^2$ uncertainty set, we improve the existing sample complexity from a quadratic dependency on the state space size $S$ to a linear one through 1) the property of the dual equivalence in Lemma 5; and 2) the leave-one-out technique to decouple statistical dependency, inspired by [3][4]. * **Technical contribution to improving the lower bound.** To achieve a much tighter lower bound compared to prior work [1], we construct new hard instances (RMDPs), illustrated in Figure 2(a) of the uploaded **PDF**, which are different from the ones usually used in standard RL [2] and the prior art [1] of robust RL. The new instances are inspired by the asymmetric structure of RMDPs induced by the additional infimum operator in the robust value function, and are tailored to the uncertainty level $\sigma$. Please refer to Appendices D and F for more details. ### 2. The challenges of extending to the KL divergence case. Thanks for raising this insightful question. Extending the current results to the KL divergence is definitely of great interest. 
For the lower bound, it is promising to use the hard-instance construction approach in this work to improve the lower bound of the existing work [5] on the KL divergence, but the main challenge is that the calculation in the KL case is much more complicated, since KL is more non-linear than both TV and $\chi^2$ and thus requires careful design. To improve the upper bound in the KL divergence case, we may need to carefully control the bound on the dual parameter --- which is currently the bottleneck for achieving minimax-optimal sample complexity with respect to the effective horizon length $\frac{1}{1-\gamma}$. In summary, extending the results to the KL divergence is promising but may need additional specific techniques. [1] Yang, Wenhao, Liangyu Zhang, and Zhihua Zhang. "Toward theoretical understandings of robust Markov decision processes: Sample complexity and asymptotics." The Annals of Statistics 50.6 (2022): 3223-3248. [2] Gheshlaghi Azar, Mohammad, Rémi Munos, and Hilbert J. Kappen. "Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model." Machine Learning 91 (2013): 325-349. [3] Agarwal, Alekh, Sham Kakade, and Lin F. Yang. "Model-based reinforcement learning with a generative model is minimax optimal." Conference on Learning Theory. PMLR, 2020. [4] Li, Gen, et al. "Breaking the sample size barrier in model-based reinforcement learning with a generative model." Advances in Neural Information Processing Systems 33 (2020): 12861-12872. [5] Shi, Laixi, and Yuejie Chi. "Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity." arXiv preprint arXiv:2208.05767 (2022). --- Rebuttal Comment 1.1: Title: Thanks for your insightful suggestions! Comment: Dear reviewer, Thank you once again for investing your valuable time in providing feedback on our paper. Your insightful suggestions have led to significant improvements in our work, and we look forward to possibly receiving more feedback from you. 
Since the discussion period between the author and reviewer is rapidly approaching its end, we kindly request you to review our responses to ensure that we have addressed all of your concerns. Also, we remain eager to engage in further discussion about any additional questions you may have. Best, Authors
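As a concrete illustration of the inner infimum over the TV uncertainty set discussed in this rebuttal, the following sketch (our own illustrative code, not the paper's implementation) computes the worst-case expectation for a single transition row. The linear program has a greedy solution: move up to $\sigma$ probability mass from the highest-value states onto the lowest-value state, which also hints at why the range of the robust value function shrinks as $\sigma$ grows:

```python
import numpy as np

def tv_robust_expectation(p0, v, sigma):
    """Worst-case expectation inf_P P.v over the TV uncertainty set
    {p in simplex : 0.5 * ||p - p0||_1 <= sigma}.

    Greedy LP solution: drain probability mass from the highest-value
    states (up to a total of `sigma`) and place it on the lowest-value
    state, pulling the expectation toward min(v).
    """
    p = np.array(p0, dtype=float)
    v = np.asarray(v, dtype=float)
    lo = int(np.argmin(v))               # mass receiver: lowest value
    budget = sigma
    for i in np.argsort(-v):             # donors: highest value first
        if i == lo or budget <= 0:
            continue
        take = min(p[i], budget)
        p[i] -= take
        p[lo] += take
        budget -= take
    return float(p @ v)
```

For example, with $P^0 = (0.5, 0.3, 0.2)$, $V = (1, 2, 3)$ and $\sigma = 0.25$, the nominal expectation $1.7$ drops to the worst-case value $1.25$.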
Summary: 1. This paper studies model robustness in RL via the framework of distributionally robust MDPs. 2. The authors derive the sample complexity of RMDPs using a model-based algorithm called distributionally robust value iteration when the uncertainty set is measured via either the total variation or the chi-square divergence. 3. Lower bounds are developed to benchmark its tightness. 4. An interesting insight from this paper is that the required sample complexity of RMDPs can be higher or lower than that of the standard MDP depending on the choice of uncertainty measure. Strengths: 1. Novelty: the improvement of the upper and lower bounds over the existing work is nontrivial; new techniques are developed. 2. The presentation is very clear, especially the summaries in Tables 1-2 and the illustrations in Figure 1. 3. The insights from the theoretical results are interesting: RMDPs can be harder than standard MDPs under the chi-square distance but can be easier than standard MDPs under the TV distance. Weaknesses: 1. It would be better to add experiments to empirically demonstrate the theoretical results in the paper. 2. It would be better to add a discussion on how the theoretical results in this paper could help practitioners. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What is the intuition / theoretical explanation for why whether RMDPs are harder or easier than standard MDPs depends heavily on both the size and shape of the uncertainty set? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's careful review and insightful feedback. It is rewarding to know that the reviewer recognizes the significance of our contributions --- the new techniques and interesting takeaway message. In what follows, we provide our response to the reviewer's comments. ### 1. Experimentally demonstrate the theoretical results Thanks for the insightful suggestion. We have conducted new numerical experiments to demonstrate and verify the theoretical findings as the reviewer suggested. Please refer to the **General response** and the uploaded **PDF** for details. ### 2. Adding discussion of how the results can help practitioners Thanks for the valuable suggestion. We shall add the following discussion in the main text. The findings motivate practitioners to: * **Design robust RL algorithms, since robustness may be a free lunch.** The results show that promoting additional robustness in RL algorithms is not necessarily harder than standard RL in terms of sample requirements. So we can take this free lunch and equip RL algorithms with robustness. * **Design the uncertainty set carefully.** The results show that the statistical difficulty of robust RL heavily depends on the shape and size of the prescribed uncertainty set. So we need to design the uncertainty set carefully: 1) shape: the results for the TV uncertainty set show that using simple or linear divergence functions such as $\ell_p$ norms (including TV) may be easier than other divergence functions and lead to lower sample requirements; 2) size: the results for the $\chi^2$ uncertainty set indicate that we should not choose too large an uncertainty set, which induces both an over-conservative algorithm and sample inefficiency (seen from the exploding sample requirement as $\sigma$ increases). ### 3. Explanation of why RMDPs are not necessarily harder nor easier than standard MDPs --- it depends on both the shape and size of the uncertainty set. Thanks for raising this insightful question. 
We would be happy to provide more explanation. The difficulty of solving standard RL or robust RL is mainly determined by the following error terms. Given the same number of samples (the same $\widehat{P}^0$), a smaller error term means the task is easier. $\text{Standard RL:} \quad \quad \delta\_{\text{RL}} = \underset{ {\color{blue}{\bf \text{ linear}}} \text{ w.r.t. } P^0 - \widehat{P}^0 }{\underbrace{\Big| P^0\widehat{V} -\widehat{P}^0\widehat{V} \Big|}}$ $\text{Robust RL:} \quad \delta\_{\text{robust RL}}= \underset{ {\color{red}{\bf \text{complex form} }} \text{ w.r.t. } P^0 - \widehat{P}^0 \text{ due to inner problem over uncertainty set } \mathcal{U}^\sigma\_\rho(\cdot)}{\underbrace{ \Big|\inf\_{\mathcal{P}\in \mathcal{U}^\sigma\_\rho\left(P^0 \right)} \mathcal{P}\widehat{V}\_{\text{rob}}- \inf\_{\mathcal{P} \in \mathcal{U}^\sigma\_\rho\left(\widehat{P}^0 \right)} \mathcal{P}\widehat{V}\_{\text{rob}} \Big|}}$ The error terms mainly depend on two factors: 1) the relationship w.r.t. the model estimation error $P^0 - \widehat{P}^0$; 2) the range of the value function $\widehat{V}$ or $\widehat{V}_{\text{rob}}$. **Briefly, both the shape and size of the uncertainty set influence the error term and thus determine the difficulty of robust RL problems at a given sample size:** 1) different uncertainty shapes lead to either a simple relationship (TV) or a complex relationship ($\chi^2$) w.r.t. $P^0 - \widehat{P}^0$; 2) different sizes, determined by $\sigma$, lead to different ranges of the robust value function $\widehat{V}_{\text{rob}}$. More specifically, we show how the shape and size determine the statistical difficulty of robust RL for the TV and $\chi^2$ uncertainty sets, respectively: * **Using the TV uncertainty set: easier than standard RL.** In this case, the error term is shown to be $\delta\_{\text{robust RL}} = \Big| P^0\widehat{V}\_{\text{rob}} -\widehat{P}^0\widehat{V}\_{\text{rob}} \Big|$ (determined by the TV uncertainty shape), which is also linear w.r.t. 
$P^0 - \widehat{P}^0$ --- the same as standard RL. Meanwhile, the range of the robust value function $\widehat{V}\_{\text{rob}}$ in robust RL can shrink rapidly (as the size $\sigma$ increases) and become smaller than the range of $\widehat{V}$ in standard RL, since the values in all states are pushed toward the minimum one and become close to each other. As a result, the error term of robust RL $\delta\_{\text{robust RL}}$ is equal to or smaller than that of standard RL $\delta_{\text{RL}}$ when trained with the same number of samples, i.e., robust RL becomes easier than standard RL. * **Using the $\chi^2$ uncertainty set: can be harder than standard RL.** In this case, the error term of robust RL $\delta_{\text{robust RL}}$ is non-linear w.r.t., and sensitive to, the model estimate error $P^0 - \widehat{P}^0$ (determined by the $\chi^2$ uncertainty shape), which leads to a large error term even if $P^0 - \widehat{P}^0$ is small, especially when $\sigma$ is large (e.g., $\sigma > O(\frac{1}{1-\gamma})$). So when using the same number of samples --- the same $P^0 - \widehat{P}^0$ --- standard RL has an error term that is linear in $P^0 - \widehat{P}^0$, while the error term of robust RL may explode; namely, robust RL becomes much harder than standard RL. --- Rebuttal Comment 1.1: Title: Thanks for your insightful suggestions! Comment: Dear reviewer, Thank you once again for investing your valuable time in providing feedback on our paper. We have added new experiments inspired by your insightful suggestions, and we look forward to possibly receiving more feedback from you. Since the discussion period between authors and reviewers is rapidly approaching its end, we kindly request you to review our responses to ensure that we have addressed all of your concerns. Also, we remain eager to engage in further discussion about any additional questions you may have.
Best, Authors --- Rebuttal Comment 1.2: Title: Kind reminder: the discussion period is ending Comment: Dear reviewer, Thank you once again for supporting this work and providing feedback on our paper. As the discussion period will end within the next day, we kindly ask that you review our responses to ensure that we have addressed all of your concerns. Best, Authors
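The TV error-term argument in the rebuttal above can be made concrete with a small numerical sketch (our own illustration, not code from the paper; the function name `tv_worst_case_mean` is ours). Over a TV ball, the inner infimum has an exact greedy solution: the worst-case kernel moves at most $\sigma$ units of probability mass from the highest-value states onto the lowest-value state.

```python
import numpy as np

def tv_worst_case_mean(p0, v, sigma):
    """Exactly solve  inf { p @ v : p in simplex, TV(p, p0) <= sigma }.

    Since TV(p, p0) = 0.5 * ||p - p0||_1, at most `sigma` units of
    probability mass may be relocated; moving it from the highest-value
    states onto the lowest-value state solves this linear program exactly.
    """
    p = np.asarray(p0, dtype=float).copy()
    v = np.asarray(v, dtype=float)
    lo = int(np.argmin(v))            # destination: lowest-value state
    budget = float(sigma)
    for s in np.argsort(v)[::-1]:     # donors: highest-value states first
        if budget <= 0 or s == lo:
            continue
        moved = min(p[s], budget)
        p[s] -= moved
        p[lo] += moved
        budget -= moved
    return float(p @ v)

p0 = np.array([0.5, 0.3, 0.2])
v = np.array([1.0, 0.0, 2.0])
print(tv_worst_case_mean(p0, v, 0.0))   # nominal mean p0 @ v
print(tv_worst_case_mean(p0, v, 0.2))   # 0.2 mass moved onto the v=0 state
print(tv_worst_case_mean(p0, v, 1.0))   # degenerates to min(v)
```

Because the minimizer only re-weights the nominal distribution with a fixed transport pattern, the worst-case expectation stays affine in $p_0$, mirroring the linearity in $P^0 - \widehat{P}^0$ used in the argument above.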
Summary: The research primarily focuses on the robust Markov decision process (RMDP) setting, where the objective is to learn a robust policy, with the uncertainty set measured by f-divergences, using a model-based algorithm. One of the key ideas of the paper lies in deriving precise sample complexity bounds with access to a generative model, showing that RMDPs are not necessarily more difficult than standard MDPs, which is an important finding in general. Strengths: One of the primary challenges in the field of robust MDPs lies in the selection of the uncertainty set, including its shape and size, and past literature has assumed certain structures, specifically (s,a)-rectangularity, to solve the robust MDP problem. The paper provides a detailed study of the effect of the uncertainty set on robust MDPs and the sample complexity analysis. A primary contribution of the paper lies in reducing the significant gap between the upper and lower bounds of past literature for different f-divergences. A standard formulation in robust MDPs deals with defining the uncertainty set $U_{\rho}^{\sigma}(P^0)$ with a divergence $\rho$ and radius $\sigma$ around the nominal transition kernel, following the (s,a)-rectangularity condition, and solving robust value iteration. Although robust value iteration is hard to solve in general, the dual of the problem is reasonably simple and can be applied due to the strong duality condition. With this, the authors develop distributionally robust value iteration and improve the upper bounds for RMDPs with various f-divergences. The theorems also highlight the sample complexity over the entire range of the uncertainty level and how the geometry of the uncertainty set can affect the sample complexity analysis of RMDPs. Overall the paper is clearly written, with detailed analysis, and also provides the theory and proofs of earlier results as required, which I found very helpful.
Weaknesses: 1. In the statement of Theorem 1, where the relation of RMDPs' sample complexity to that of standard MDPs is discussed, it states that when $\sigma \leq 1 -\gamma$, the sample complexity is almost the same as for MDPs, which makes sense since a smaller $\sigma$ means we are eventually optimizing just over the nominal $P^0$. However, the other result shows that when $\sigma > 1- \gamma$, the sample complexity of RMDPs becomes smaller than that of MDPs, which is not very intuitive. When we increase the size of the uncertainty set over which the robust optimization is performed, the complexity should ideally increase. Hence, the intuition behind this result is not very clear and is not explicitly stated here. Why the shape of the uncertainty set with TV can produce a better sample complexity is not absolutely clear and intuitive. 2. The authors propose a distributionally robust value iteration algorithm (Algorithm 1) which results in an improved analysis. However, the novelty of the proposed algorithm is not very clear, as robust value iteration and solving the robust problem with Wasserstein divergence have existed long before. It will be critical to specifically highlight the novel aspects of the algorithm and which component is causing the improvement in the analysis. 3. The paper lacks any experimental study to verify the theoretical claims for the proposed algorithm. It will be helpful if the authors can provide simple experimental evaluations or ablations on some vanilla RL environments, and show how the proposed algorithm achieves the improvements over prior work in terms of sample complexity. That will make the intuition behind the improvement in the algorithm very clear. 4. The distribution shift issue for RMDPs is mentioned, where the transition kernel drawn from the uncertainty set can be different from the nominal kernel. However, it's not clear whether it is addressed by the current algorithm, and how?
Also, won't this issue be exacerbated when the size of the uncertainty set is large, requiring more samples, which seems opposite to what is shown in Theorem 1? It also needs further clarity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Theorem 1, when $\sigma > 1- \gamma$, the sample complexity of RMDPs becomes smaller than that of MDPs. However, when we increase the size of the uncertainty set, it indicates we are more unsure of the transition, which can come from a larger radius around the nominal. Shouldn't we need more samples in this case? Why does considering the total variation cause the reverse? This is critical, and a detailed discussion will be helpful. 2. The authors compare the sample-complexity results of RMDPs with standard MDPs, which is interesting and important. However, the result (Th 1) holds for $\epsilon \in [0, \frac{k_0}{1 - \gamma}]$. However, in Agarwal et al., they also show the sample complexity is $O(S^2 A)$ when $\epsilon^2 \geq \frac{1}{k \cdot (1-\gamma)^3}$, i.e., in regimes where no meaningfully accurate approximation to the actual transition probabilities can be constructed. It will be interesting to discuss this regime as well. 3. The authors have mentioned the geometry of the uncertainty set in connection to $\sigma$, but how the geometry is defined and what the geometry of the set refers to is not explicitly mentioned. What exactly does the geometry of the set refer to? Does it just refer to the shape owing to the type of divergence and the size corresponding to the radius? Or are there further detailed connections drawn to the geometry? 4. Although the research closes the gap b/w the upper and lower bounds with more precise bounds, the generative model assumption is, in general, restrictive, and one of the major difficulties for model-based RL is approximating the transition model, which is hard.
#Minor Comment In several places, the citations and references do not come out properly, especially in the Appendix (Lines 663, 689, 692, 701, 800, 801, 1079), which makes it a little difficult to follow. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some of the future and potential research directions are mentioned in the discussion, which I feel is sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's comprehensive feedback and recognition of the significance of our contributions. ### 1. The intuition of why robust RL with TV uncertainty is easier than standard RL. The intuition is actually the opposite --- as the uncertainty level $\sigma$ increases, fewer samples are needed to achieve a desired policy with a certain accuracy. This intuition is not limited to TV, but extends to any $\ell_p$ norm and even other uncertainty divergence functions. Specifically, recall that the goal of robust RL is to find a policy $\pi$ whose performance is close to that of the robust optimal policy, i.e., $V^{\star,\sigma}(s) - V^{\pi,\sigma}(s) \leq \varepsilon$. As the uncertainty level increases, the performance gap between the optimal policy and any policy decreases rapidly, given that no policy can significantly outperform others when the environment may vary substantially. Consequently, only a small number of samples is required to reach the desired accuracy, since the potential for improvement is minimal. ### 2. Clarification of the algorithm novelty in Section 3. * **We focus on the statistical understanding of existing methods.** We did not propose a new algorithm. Algorithm 1 in Section 3 is a well-known algorithm [1] for robust RL that we show here to keep this paper self-contained. We highlight that this work focuses on studying the inherent difficulty of robust MDPs in terms of sample cost, to provide a solid foundation for algorithm design; we take Algorithm 1 as an example. Our statistical results work for any model-based algorithm obeying certain conditions (achieving a policy $\widehat{\pi}$ obeying $\|\widehat{V}^{\star, \sigma} - \widehat{V}^{\widehat{\pi}, \sigma}\|_\infty \leq \varepsilon_1$ for small enough $\varepsilon_1$), not just Algorithm 1. * **Wasserstein uncertainty set.** We concur that the Wasserstein distance is also an option for the uncertainty set, but it is relatively new in robust RL [2].
Investigating Wasserstein sets based on our findings presents an interesting direction. ### 3. New experiments to verify the theoretical claims for the proposed algorithm. We have added new numerical experiments to verify the theoretical findings, as the reviewer suggested. Please refer to the **General response** and the uploaded **PDF** for details. ### 4. How Algorithm 1 addresses the uncertainty issue. To address the uncertainty that the transition kernel $\mathcal{P}$ may be perturbed from the (estimated) nominal kernel $\widehat{P}^{0}$ inside the set $\mathcal{U}^\sigma(\widehat{P}^{0})$, each value iteration step in Algorithm 1 involves an additional infimum operator over $\mathcal{U}^\sigma(\widehat{P}^{0})$ as below (line 5 of Algorithm 1): $$\widehat{Q}\_t = r + \gamma \inf\_{ \mathcal{P} \in \mathcal{U}^\sigma(\widehat{P}^{0})} \mathcal{P} \widehat{V}\_{t-1}.$$ In words, Algorithm 1 addresses the uncertainty by considering the worst-case performance when the transition kernel is arbitrarily drawn from the uncertainty set $\mathcal{U}^\sigma(\widehat{P}^{0})$. In addition, there is no issue when the uncertainty set size increases with $\sigma$, since Algorithm 1 knows the size ($\sigma$) and solves the corresponding problem based on it. ### 5. Discussion about the range of $\varepsilon$ in Theorem 1 Theorem 1 in this work holds for a large range of accuracy levels, $\varepsilon \in \left(0, \sqrt{1/\max\{1-\gamma, \sigma\}} \right]$. We leave the extension to the full range as future work. Note that such an extension is non-trivial and may require additional technical tools; for example, [4] extends the range $\varepsilon \in \left(0, \sqrt{1/(1-\gamma)} \right]$ in [3] (the work the reviewer mentioned) to the full range in the standard RL case. ### 6. Improving the specification.
* **The uncertainty set geometry.** The formal definition of the uncertainty set is shown in Equation (3): $$\mathcal{U}\_\rho^{\sigma}(P^0) := \otimes \; \mathcal{U}\_\rho^{\sigma}(P^0\_{s,a})\qquad \text{with}\quad \mathcal{U}\_\rho^{\sigma}(P^0\_{s,a}) := \left\\{ P\_{s,a} \in \Delta (\mathcal{S}): \rho \left(P\_{s,a}, P^0\_{s,a}\right) \leq \sigma \right\\}.$$ So the geometry of the uncertainty set (which can be seen as a ball) is determined by two factors: 1) the divergence function $\rho(\cdot)$, which determines the 'distance' between two distributions; 2) the uncertainty level $\sigma$, which represents the radius of the uncertainty ball. * **Polishing the appendix.** We have revised all the typos that the reviewer mentioned and polished the paper again. ### 7. Discussing the limitation of the generative model setting and model-based algorithms * **The generative model setting is a good starting point.** We believe understanding robust RL in the fundamental generative model setting is highly nontrivial and plays an essential role in shaping the theoretical foundation of robust RL, where all prior analyses still fail to give a clear message. Theoretical underpinnings in this setting are critically needed before addressing more complex cases, e.g., online/offline robust RL. * **Limitation of model-based RL.** We believe our work lays a solid foundation to design and understand the counterpart of model-based RL --- model-free RL, which does not require estimating the model explicitly. > [1] Iyengar, G. N. (2005). Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280. [2] Xu, Zaiyan, Kishan Panaganti, and Dileep Kalathil. "Improved sample complexity bounds for distributionally robust reinforcement learning." International Conference on Artificial Intelligence and Statistics. PMLR, 2023.\ [3] Agarwal, Alekh, Sham Kakade, and Lin F. Yang. "Model-based reinforcement learning with a generative model is minimax optimal." Conference on Learning Theory.
PMLR, 2020.\ [4] Li, Gen, et al. "Breaking the sample size barrier in model-based reinforcement learning with a generative model." Advances in Neural Information Processing Systems 33 (2020): 12861-12872. --- Rebuttal Comment 1.1: Title: Response to Rebuttal by Authors Comment: Thanks for providing detailed and concrete comments on my concerns. I agree with most of the comments and justifications provided by the authors. I agree that the generative model setting is indeed a good starting point and can potentially be extended to a setting removing this assumption; this concern mostly originated from the claims and Point 2. Although, it is reasonable to start with the generative model setting. The insights are interesting, and I also want to thank the authors for providing the additional experiment which supports the hypothesis. I am still not clear regarding the description of Point 1, especially the second part: "...the sample complexity of RMDPs becomes smaller than MDP, which is not very intuitive". Can the authors clarify this further for my understanding? Also, the Wasserstein set in the context of robust MDPs is not very new [1,2], and I would request the authors to include a discussion on the same. 1. Esther Derman and Shie Mannor. (2020). Distributional Robustness and Regularization in Reinforcement Learning. Retrieved from https://arxiv.org/pdf/2003.02894.pdf 2. Mohammed Amin Abdullah, Hang Ren, Haitham Bou Ammar, Vladimir Milenkovic, Rui Luo, Mingtian Zhang, & Jun Wang. (2019). Wasserstein Robust Reinforcement Learning. Retrieved from https://arxiv.org/pdf/1907.13196.pdf --- Reply to Comment 1.1.1: Title: Response to reviewer fPn4 Comment: Dear Reviewer: Thank you so much for engaging in the discussion with us and providing insightful feedback! We are grateful that the reviewer found our experiments helpful! We answer your questions as follows. ### More intuition about why solving RMDPs with TV uncertainty is easier than standard MDPs.
Thank you so much for raising this question since it is a key finding in this work --- promoting additional robustness in RL algorithms sometimes can be a free lunch in terms of sample cost. We shall briefly show the key technical intuition and hope this will be helpful. The difficulty of solving standard RL or robust RL is mainly determined by the following error terms. Given the same number of samples (same model estimate $\widehat{P}^0$), a smaller error term means the task is easier. $\text{Standard RL:} \quad \quad \delta\_{\text{RL}} = \underset{ {\color{blue}{\bf \text{ linear}}} \text{ w.r.t. } P^0 - \widehat{P}^0 }{\underbrace{\Big| P^0\widehat{V} -\widehat{P}^0\widehat{V} \Big|}}$ $\text{Robust RL:} \quad \delta\_{\text{robust RL}}= \underset{ {\color{red}{\bf \text{complex form} } } \text{ w.r.t. } P^0 - \widehat{P}^0 \text{ due to inner problem over uncertainty set } \mathcal{U}^\sigma\_\rho(\cdot)}{\underbrace{ \Big|\inf\_{\mathcal{P}\in \mathcal{U}^\sigma\_\rho\left(P^0 \right)} \mathcal{P}\widehat{V}^{\sigma}\_{\text{rob}}- \inf\_{\mathcal{P} \in \mathcal{U}^\sigma_\rho\left(\widehat{P}^0 \right)} \mathcal{P}\widehat{V}^{\sigma}_{\text{rob}} \Big|}}$ The error terms mainly depend on two factors: 1) the relationship w.r.t. the model estimate error $P^0 - \widehat{P}^0$; 2) the range of the value function $\widehat{V}$ or $\widehat{V}^{\sigma}\_{\text{rob}}$. * **Using TV uncertainty set: easier than standard RL.** In this case, the error term is shown to be $\delta\_{\text{robust RL}} = \Big| P^0\widehat{V}^{\sigma}\_{\text{rob}} -\widehat{P}^0\widehat{V}^{\sigma}\_{\text{rob}} \Big|$ that is also linear w.r.t. $P^0 - \widehat{P}^0$ --- the same as standard RL. 
Meanwhile, the range of the robust value function $\widehat{V}^{\sigma}\_{\text{rob}}$ in robust RL decreases rapidly as the level $\sigma$ increases and becomes smaller than the range of $\widehat{V}$ in standard RL, since the values in all states are pushed toward the minimum one and become close to each other. As a result, the error term of robust RL $\delta\_{\text{robust RL}}$ becomes smaller than that of standard RL $\delta_{\text{RL}}$ as $\sigma$ grows, i.e., robust RL becomes easier than standard RL. ### Using Wasserstein distance in robust RL. Thanks for raising this question and providing important related works [1][2]. We will definitely add these references and provide more discussion in the related work section. We agree with the reviewer that the use of Wasserstein distance for robustness in RL has been explored by previous works [1-4]. However, these prior investigations using Wasserstein distance in RL either concentrate on empirical algorithms [2-3] or on robust formulations distinct from robust MDPs (as considered in our work), until the recent study [4]. Specifically, [1] addresses a different robust formulation through regularization using Wasserstein distance and elucidates the theoretical connection between regularization and robust MDPs. We believe that the exploration of the Wasserstein uncertainty set in robust MDPs is far from mature [4] and is an interesting direction for future work. Although our current work focuses on $f$-divergences (TV or $\chi^2$), the technical tools and discoveries in this study have substantial potential for the Wasserstein distance, offering further insights. **We thank the reviewer again for their time and would be glad to discuss further if there are additional concerns.** [1] Esther Derman and Shie Mannor. (2020). Distributional Robustness and Regularization in Reinforcement Learning. Retrieved from https://arxiv.org/pdf/2003.02894.pdf [2] Mohammed Amin Abdullah, Hang Ren, Haitham Bou Ammar, Vladimir Milenkovic, Rui Luo, Mingtian Zhang, & Jun Wang. (2019).
Wasserstein Robust Reinforcement Learning. Retrieved from https://arxiv.org/pdf/1907.13196.pdf [3] Hou, Linfang, et al. "Robust reinforcement learning with Wasserstein constraint." arXiv preprint arXiv:2006.00945 (2020). [4] Xu, Zaiyan, Kishan Panaganti, and Dileep Kalathil. "Improved sample complexity bounds for distributionally robust reinforcement learning." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. --- Reply to Comment 1.1.2: Title: Kind reminder: the discussion period is ending Comment: Dear reviewer, Thank you once again for engaging in the discussion with us and providing insightful feedback. As the discussion period will end within the next day, we kindly ask that you review our responses to ensure that we have addressed your concerns. If we have met your expectations, we would greatly appreciate your consideration in raising your support for this paper! Best, Authors
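The robust Bellman update discussed in this thread, $\widehat{Q}\_t = r + \gamma \inf\_{\mathcal{P} \in \mathcal{U}^\sigma(\widehat{P}^{0})} \mathcal{P} \widehat{V}\_{t-1}$, can be sketched in a few lines for an (s,a)-rectangular TV uncertainty set, where the inner infimum admits an exact greedy solution. This is our own minimal sketch, not the authors' implementation; the function names are ours.

```python
import numpy as np

def tv_inf_mean(p0, v, sigma):
    # Exact solution of inf_{TV(p, p0) <= sigma} p @ v: move up to sigma
    # mass from the highest-value states onto the lowest-value state.
    p, lo, budget = p0.astype(float).copy(), int(np.argmin(v)), float(sigma)
    for s in np.argsort(v)[::-1]:
        if budget <= 0 or s == lo:
            continue
        m = min(p[s], budget)
        p[s] -= m
        p[lo] += m
        budget -= m
    return float(p @ v)

def robust_value_iteration(P0, r, gamma, sigma, tol=1e-10):
    """Distributionally robust value iteration with a TV uncertainty set of
    radius sigma around each row of the nominal kernel P0.
    P0 has shape (S, A, S); r has shape (S, A)."""
    S, A, _ = P0.shape
    V = np.zeros(S)
    while True:  # the robust Bellman operator is a gamma-contraction
        Q = np.array([[r[s, a] + gamma * tv_inf_mean(P0[s, a], V, sigma)
                       for a in range(A)] for s in range(S)])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

With $\sigma = 0$ this reduces to standard value iteration; increasing $\sigma$ can only decrease the fixed point, consistent with the range-shrinking argument made above for TV uncertainty.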
Summary: This paper presents new upper and lower bounds for distributionally robust MDPs where the uncertainty set for the transition kernel is specified as a ball with $\chi^{2}$ divergence or total variation as the distance measure. The bounds significantly improve previous results. Strengths: The paper is generally well-written, and is easy and interesting to read. The results significantly improve existing bounds and are surprising. However, I didn't read the proofs. Weaknesses: * The notations $\lesssim$ and $\asymp$ should be defined. While $\lesssim$ is used in many places, it seems it should be just $\le$ or $<$ according to the theorems. * The distributionally robust value iteration algorithm is taken from (Yang et al., 2022; Iyengar, 2005), but Section 3 describes the algorithm as if it were a contribution of this paper. * The bounds usually do not hold for all $\gamma \in (0, 1)$, thus there are still some gaps in the results, while the paper did not make this clear. An intuitive discussion on why the results do not hold for $\gamma$ values outside the ranges given in the theorems would be helpful. * I find the statements of Theorem 2 and Theorem 4 unclear. A sample complexity lower bound result should show that no algorithm can solve a problem given fewer than a certain number of samples. However, these theorems do not mention algorithms, but instead mention two robust MDPs and take the infimum with respect to the random policy $\hat{\pi}$. Did I miss something? * I find the lower bound in Theorem 4 very surprising, as the lower bound is discontinuous and non-monotonic. I note that the paper claims the bounds are at least nearly tight, but is it possible that the lower bounds are not sufficiently tight? Can an intuitive explanation for the discontinuity and non-monotonicity be given?
* There are some discussions on whether uncertainty of the transition kernel makes learning an $\epsilon$-optimal value function harder, and the discussion is based on a comparison of sample complexity lower bounds for robust MDPs and standard MDPs. However, this may be misleading as the lower bounds can underestimate the complexity of a problem. * The appendix has some broken references (??). Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would appreciate responses to the points regarding the bounds in Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper didn't discuss the limitations. I believe the paper should at least point out that the results do not hold for all $\gamma$ values and provide a discussion on this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude to the reviewer for their insightful feedback and valuable comments. ### 1. The range of the discount factor $\gamma$ considered in our theorems. Recall that the discount factor $\gamma$ determines the effective horizon length $\frac{1}{1-\gamma}$ of RL tasks, i.e., a larger $\gamma$ leads to tasks with longer horizons. * **We consider the entire reasonable range.** We would like to highlight that the main feature and challenge of RL tasks is their **sequential** structure. So when $\gamma$ is small (e.g., $\gamma \in (0, \frac{1}{2}]$ leads to an effective horizon length of at most $2$), the sequential structure almost disappears and is of much less interest to the RL community. So people usually focus on the reasonable range $\gamma \in (c, 1)$ for some small positive constant $c$ [1][2], such as $\gamma \in [\frac{1}{2},1)$. The situations when $\gamma\in (0, c]$ generally receive little attention from the RL community. * **Our results can be directly extended to a broader range of $\gamma$.** Recall that all four of our theorems hold at least for $\gamma \in (\frac{3}{4}, 1)$. The theorems can be directly extended to a broader range $\gamma \in (c, 1)$ with $c$ as small as desired, thus almost covering the full range $(0,1)$. ### 2. Improving the statement of Theorems 2 and 4. The reviewer is correct that the lower bound means that no algorithm (estimator) can achieve the desired accuracy given a sample size smaller than a certain number. So $\hat{\pi}$ in Theorems 2 and 4 does not represent a random policy, but the output of any algorithm (estimator). We will definitely improve the statement and make the definition of $\hat{\pi}$ clearer. ### 3. More explanation of the lower bound in Theorem 4. Thanks for being interested in our results.
* **A nearly tight lower bound.** The lower bound is nearly tight in the following sense: 1) it is tight at least for a certain range of uncertainty levels (when $\sigma \in O(1)$), since it matches the upper bound in Theorem 3, which verifies that the lower bound in this range cannot be improved further. In addition, the lower bound nearly matches the upper bound, up to the term involving $\gamma$, when $\sigma \gtrsim O(\frac{1}{1-\gamma})$. The reviewer can refer to Figure 1(b) for an illustration. In other regimes of $\sigma$, the reviewer is right that the lower bound may not be tight enough, which we leave to future work. * **Intuition for the non-monotonic shape.** We highlight that as the uncertainty level $\sigma$ varies, the sample requirement may exhibit different behaviors due to the changing uncertainty set --- which is an important message from this work. However, we agree with the reviewer that the non-monotonicity is counterintuitive and may be improvable. Although we significantly improve the prior lower bound, we believe there may still be a gap towards the optimal one. The gap may arise because we construct the same set of hard instances (RMDPs) for all $\sigma$, while for different $\sigma$, distinct hard instances may be required to achieve a tighter lower bound --- requiring specific designs. ### 4. Comparing the difficulty of robust RL and standard RL in terms of sample requirement --- based only on lower bounds? We compare the difficulty of robust and standard RL by comprehensively summarizing the information from both lower bounds and upper bounds. Specifically, the sample complexity of standard RL was settled at $\widetilde{O}\left(\frac{SA}{(1-\gamma)^3\varepsilon^2} \right)$ by the matching lower and upper bounds [3]. The messages in this work are: 1) robust MDPs are easier to learn than standard MDPs under the TV distance.
We arrive at this conclusion by settling the sample complexity of robust MDPs at $\widetilde{O} \left( \frac{SA}{(1-\gamma)^2\varepsilon^2} \min \left\\{ \frac{1}{1-\gamma}, \frac{1}{\sigma} \right\\} \right)$ through the matching lower bound (Theorem 2) and upper bound (Theorem 1); 2) robust MDPs can be harder to learn than standard MDPs under the $\chi^2$ divergence. This claim is reasonable since the derived sample complexity lower bound for robust MDPs (Theorem 4) already exceeds the sample complexity of standard MDPs, at least for a certain range of uncertainty levels (see Figure 1(b)). Although the lower bound in Theorem 4 may not be sufficiently tight, it already shows that robust MDPs can be harder than standard MDPs. ### 5. Improving clarity and writing. Thanks for the careful review and valuable suggestions. * **Confusion about Section 3.** The reviewer is correct that we did not propose a new algorithm but concentrated on understanding existing methods. We shall definitely highlight that Algorithm 1 in Section 3 was proposed by [4] and is just an example. Indeed, our sample complexity upper bound works for any model-based algorithm obeying certain conditions (learning a policy $\widehat{\pi}$ obeying $\|\widehat{V}^{\star, \sigma} - \widehat{V}^{\widehat{\pi}, \sigma}\|_\infty \leq \varepsilon_1$ for small enough $\varepsilon_1$), not just Algorithm 1. * **Notation and definitions of $\lesssim, \asymp$.** We have added clarifications for these notations in the main text. * **Typos in the appendix.** We have fixed the typos mentioned by the reviewer and polished the main text and appendix again. > [1] Li, Gen, et al. "Settling the sample complexity of model-based offline reinforcement learning." arXiv preprint arXiv:2204.05275 (2022). \ [2] Yan, Yuling, et al. "The efficacy of pessimism in asynchronous Q-learning." IEEE Transactions on Information Theory (2023). \ [3] Li, Gen, et al.
"Breaking the sample size barrier in model-based reinforcement learning with a generative model." Advances in Neural Information Processing Systems 33 (2020): 12861-12872. \ [4] Iyengar, G. N. (2005). Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280. --- Rebuttal Comment 1.1: Title: Thanks for your insightful suggestions! Comment: Dear reviewer, Thank you once again for investing your valuable time in providing feedback on our paper. Your insightful suggestions have led to significant improvements in our work, and we look forward to possibly receiving more feedback from you. Since the discussion period between authors and reviewers is rapidly approaching its end, we kindly request you to review our responses to ensure that we have addressed all of your concerns. Also, we remain eager to engage in further discussion about any additional questions you may have. Best, Authors --- Reply to Comment 1.1.1: Title: Kind reminder: the discussion period is ending Comment: Dear reviewer, Thank you once again for dedicating your valuable time to provide feedback on our paper. As the discussion period will end within the next day, we kindly ask that you review our responses to ensure that we have fully addressed all of your concerns. If we have met your expectations, we would greatly appreciate your consideration in raising your support for this paper! Best, Authors --- Rebuttal Comment 1.2: Title: discussion Comment: Thanks for your response. Re the range of $\gamma$, it's not true that RL papers focus on "reasonable ranges": quite a few previous works only assume $\gamma \in (0, 1)$. For example, see Agarwal et al. 2020 (which is cited in your paper) and the references given there. Is there a technical reason that the range $(0, 1)$ can't be used? While your response says the results can be extended to a broader range, the way it is worded seems to suggest that simply replacing the restricted intervals by $(0, 1)$ doesn't work?
Since the main text isn't updated, can you also clarify what $\lesssim$ and $\asymp$ mean in this paper? --- Reply to Comment 1.2.1: Title: Thank you for engaging in the discussion and providing insightful feedback! Comment: Dear reviewer, ### The range of $\gamma$. Thank you for raising this question! We totally agree with the reviewer that many works deal with the full range $\gamma\in(0,1)$. We are sorry about the confusion: we do not mean that no one considers $\gamma\in(0,1)$, but only that researchers usually do not consider two results too different if the only difference is that one works for $\gamma\in(0,1)$ and the other works for $\gamma\in(\frac{1}{2}, 1)$. The reason is that tasks with a very small $\gamma$ lose the important sequential structure of RL and are of much less interest. In addition, our theorems **can work for the full range $\gamma\in(0,1)$** by adapting some numerical constants in the proof, since the assumption $\gamma \in [c, 1)$ is just for computational convenience, such as in Equation (70) of Appendix C.2. As long as $\gamma$ is a constant obeying $\gamma \in (0,1)$, we reach the same conclusion. ### Notation and definitions of $\lesssim, \asymp$ Thank you for raising this question. We are sorry about this: we had hoped to include it in the first response but deleted it due to the space limit. We will add the following in the introduction for clarification. > Here and throughout, we use the standard notation $f(n)=O(g(n))$ to indicate that $f(n)/g(n)$ is bounded above by a constant as $n$ grows.
The notation $f(\mathcal{X}) = O(g(\mathcal{X}))$ or $f(\mathcal{X}) \lesssim g(\mathcal{X})$ indicates that there exists a universal constant $C_1>0$ such that $f\leq C_1 g$, the notation $f(\mathcal{X}) \gtrsim g(\mathcal{X})$ indicates that $g(\mathcal{X}) = O(f(\mathcal{X}))$, and the notation $f(\mathcal{X})\asymp g(\mathcal{X})$ indicates that $f(\mathcal{X}) \lesssim g(\mathcal{X})$ and $f(\mathcal{X}) \gtrsim g(\mathcal{X})$ hold simultaneously. Thank you once again for engaging in the discussion with us. We hope that our answers are helpful, and we look forward to further discussion if you have additional concerns. Best, Authors
Rebuttal 1: Rebuttal: We thank the reviewers for their careful reading of the paper and their insightful and valuable feedback. Below we provide some new numerical results to corroborate the theoretical findings in this work. ### New numerical results As reviewers suggested, we add new experiments to corroborate and demonstrate the theoretical findings in this paper: * **Experimental settings.** We demonstrate the sample size requirements in robust RL when the uncertainty level $\sigma$ varies. Specifically, we evaluate robust value iteration (Algorithm 1) in the following robust MDP $\mathcal{M}_\phi= \left(\mathcal{S}, \mathcal{A}, P^0, r, \gamma \right)$ illustrated in the uploaded **PDF Figure 2(a)**, where $\gamma$ is the discount parameter, $\mathcal{S} = \{0, 1\}$, $\mathcal{A} = \{0, 1\}$, the nominal transition kernel $P^0$ obeys $P^0(1|1,0) = P^0(1|1,1) =1, P^0(1|0,0) =p, P^0(1|0,1) =q$, and the reward obeys $r(0,0) = r(0,1) =0, r(1,0) = r(1,1) = 1$. Denote by $N$ the sample size per state-action pair. For each point $(N, \gamma, \sigma)$, we conduct 100 Monte Carlo simulations and claim that a sample size $N$ successfully attains $\varepsilon$-accuracy if the accuracy is achieved in at least $95$ of them. * **Results with the TV uncertainty set.** For the TV uncertainty set, we focus on the effect of the discount complexity $\frac{1}{1-\gamma}$, which dominates the difference between robust RL and standard RL (see Figure 1(a)). Inspired by our lower bound, we set $p = 2 \max(1-\gamma, \sigma)$ and $q = p - 16(1-\gamma)\max(1-\gamma, \sigma)\varepsilon$, and fix $\varepsilon= 0.13$ (a randomly chosen small value). The results in **PDF Figure 2(b)** show that the numerical sample complexity per state-action pair $N$ scales on the order of $\frac{1}{(1-\gamma)^3}$ when the uncertainty level is small ($\sigma = 0.005$), while on the order of $\frac{1}{(1-\gamma)^2}$ when the uncertainty level is large ($\sigma = 0.3$). The results match the derived sample requirements illustrated in Figure 1.
* **Results with $\chi^2$ uncertainty set.** For $\chi^2$ uncertainty set, we focus on the effect of $\sigma$ especially when $\sigma$ is very large, since the exploded sample requirement as $\sigma$ increases is a key finding. We set $q = \frac{\sigma}{1+\sigma}$, $p = q + \frac{8}{3(1+\sigma)}\varepsilon$, and fix $\varepsilon= 0.13$ and $\gamma = 0.9$. **PDF Figure 2(c)** demonstrates that the required sample complexity increases linearly w.r.t. the uncertainty level $\sigma$ when $\sigma$ is large (in the range $(\frac{1}{1-\gamma},\infty)$), which matches our theoretical findings (see Figure 1). We would be grateful if the reviewers could take a look at the responses and consider raising support for this work if we have addressed your concerns adequately. We would be glad to discuss further if there are additional concerns. Pdf: /pdf/af70963b2a3dc420dcc97d0292d7f35d5fbb02c9.pdf
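The robust value iteration evaluated in the experimental settings above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it runs Algorithm-1-style robust VI on the two-state, two-action MDP described in the general response, with a TV uncertainty set. The closed-form two-point worst case (the adversary moving up to $\sigma$ probability mass toward the worse state) and the particular values of $p$, $q$, $\gamma$, $\sigma$ below are illustrative assumptions.

```python
import numpy as np

def tv_worst_case(p1, v, sigma):
    """Worst case of E[V] over a TV ball of radius sigma around a two-point
    distribution putting mass p1 on state 1 (convention assumed here: the
    adversary may move up to sigma probability mass toward the worse state)."""
    if v[1] >= v[0]:
        p1 = max(p1 - sigma, 0.0)
    else:
        p1 = min(p1 + sigma, 1.0)
    return p1 * v[1] + (1.0 - p1) * v[0]

def robust_vi(p, q, gamma, sigma, n_iters=3000):
    """Robust value iteration on the 2-state, 2-action MDP of the response:
    state 1 is absorbing with reward 1; state 0 has reward 0 and reaches
    state 1 with probability p (action 0) or q (action 1) under P^0."""
    reward = np.array([0.0, 1.0])
    p_to_1 = np.array([[p, q],        # P^0(1 | s=0, a)
                       [1.0, 1.0]])   # P^0(1 | s=1, a)
    v = np.zeros(2)
    for _ in range(n_iters):
        # robust Bellman update: maximize over actions the pessimistic return
        v = np.array([max(reward[s] + gamma * tv_worst_case(p_to_1[s, a], v, sigma)
                          for a in (0, 1)) for s in (0, 1)])
    return v

v_std = robust_vi(p=0.6, q=0.5, gamma=0.9, sigma=0.0)  # reduces to standard VI
v_rob = robust_vi(p=0.6, q=0.5, gamma=0.9, sigma=0.1)  # TV-robust VI
```

With $\sigma = 0$ this reduces to standard value iteration (the absorbing state's value is $1/(1-\gamma)$), and increasing $\sigma$ only lowers the computed values, consistent with the pessimistic inner minimization over the uncertainty set.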
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper focuses on the sample complexity of learning robust MDPs under generative model access, with uncertainty sets measured with TV and chi-squared divergences. The paper proposes tight upper and lower bounds for sample complexity under TV robustness and chi-squared robustness (under some range of the uncertainty radius). Curiously, the minimax bound is smaller than that of standard MDPs. The upper bound is attained by a model-based distributionally robust value iteration algorithm. (changed score from 4 -> 6 during discussion.) Strengths: 1. The paper provides tight bounds for the TV case, and somewhat tight bounds for the chi-squared case, and illustrates the interesting phenomenon where the minimax optimal bounds are sometimes easier/harder than standard RL. Weaknesses: 1. The presentation can be improved. For example, the algorithm box 1 is very bare and does not contain all the necessary details for implementation. Also, computational efficiency is a key issue in robust MDPs and the current version brushed it aside (in Line 221) into the appendix, which is 46 pages. It would be better to present it clearly in the main paper. Also, what makes this algorithm different from Yang et al., 2022? Is it just an improved analysis or something fundamentally changed in the algorithm to obtain minimax-optimal bounds? 2. The robust VI algorithm seems not novel, as it is a common method for robust MDPs, e.g., Iyengar 2005. 3. While minimax-optimal theoretical results are nice, there is limited discussion on the practicality of the algorithm, besides its worst-case sample complexities. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Which parts of the current analysis break when trying to extend to more general MDPs, like linear MDPs? 2. I really like Figure 1, but I do not understand the intuition on why robust MDP is easier/harder in certain cases.
I can certainly follow the algebra based on minimax optimal bounds, but can you describe at an intuitive level why this "curious price" appears? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We gratefully thank the reviewer for the various valuable suggestions and the praise of our interesting findings! ### 1. Questions about Algorithm 1 in Section 3. Thanks for raising questions about Algorithm 1. We address them as follows: * **Improving the specification of Algorithm 1 and its computational efficiency.** Algorithm 1 in Section 3 --- robust value iteration (VI) --- is a well-known algorithm proposed by prior art [1], which is efficient with computation cost $O(S \log(S))$ per iteration --- only a modest increase compared to standard VI ($O(S)$). We shall introduce more details of the update rule and computational complexity in the main text to keep this paper self-contained. * **Does this work propose a new algorithm?** The brief answer is no. We highlight that this work focuses on investigating the inherent difficulty of solving robust MDPs in terms of sample cost, to provide a solid foundation for algorithm design. So we did not propose new algorithms but concentrated on understanding existing ones, where we choose robust VI (Algorithm 1) as an example to develop the sample complexity upper bound. Indeed, our statistical results apply to any model-based algorithm obeying certain conditions (i.e., one that can learn a policy $\widehat{\pi}$ obeying $\|\widehat{V}^{\star, \sigma} - \widehat{V}^{\widehat{\pi}, \sigma}\|_\infty \leq \varepsilon_1$ for small enough $\varepsilon_1$), not limited to Algorithm 1. * **Practicality of Algorithm 1.** We add new experiments to evaluate Algorithm 1 and demonstrate our theoretical findings (please check the **General response** for details). In addition, Algorithm 1 with a TV or $\chi^2$ uncertainty set can be applied to more complex tasks and achieves robust performance when the testing environment deviates from the training one, such as the Gambler's problem and the Frozen Lake environment in OpenAI Gym [3]. ### 2. Challenges of extending to general MDPs like linear MDPs.
We believe the findings in the current tabular case lay a solid foundation for general MDP cases with function approximation, e.g., the finding that using a simple divergence function such as TV may lead to a smaller sample size requirement. That said, the entire pipeline in this work will need to be adapted, since general MDP cases (e.g., linear MDPs) require distinct problem formulations --- still an open problem with few studies [2] --- as well as new algorithm designs and theoretical analysis frameworks, which require more assumptions such as linear structure for linear MDPs [4] and realizability/low-rank structure for general function approximation. ### 3. Intuitions for why robust MDPs are easier or harder than standard MDPs in certain cases. Thanks for liking our illustration in Figure 1. The difficulty of solving standard RL or robust RL is mainly determined by the following error terms. Given the same number of samples (the same $\widehat{P}^0$), a smaller error term means the task is easier. $\text{Standard RL:} \quad \quad \delta\_{\text{RL}} = \underset{ {\color{blue}{\bf \text{ linear}}} \text{ w.r.t. } P^0 - \widehat{P}^0 }{\underbrace{\Big| P^0\widehat{V} -\widehat{P}^0\widehat{V} \Big|}}$ $\text{Robust RL:} \quad \delta\_{\text{robust RL}}= \underset{ {\color{red}{\bf \text{complex form} }} \text{ w.r.t. } P^0 - \widehat{P}^0 \text{ due to inner problem over uncertainty set } \mathcal{U}^\sigma\_\rho(\cdot)}{\underbrace{ \Big|\inf\_{\mathcal{P}\in \mathcal{U}^\sigma\_\rho\left(P^0 \right)} \mathcal{P}\widehat{V}\_{\text{rob}}- \inf\_{\mathcal{P} \in \mathcal{U}^\sigma\_\rho\left(\widehat{P}^0 \right)} \mathcal{P}\widehat{V}\_{\text{rob}} \Big|}}$ The error terms mainly depend on two factors: 1) the relationship w.r.t. the model estimation error $P^0 - \widehat{P}^0$; 2) the range of the value function $\widehat{V}$ or $\widehat{V}_{\text{rob}}$.
* **Using the TV uncertainty set: easier than standard RL.** In this case, the error term is shown to be $\delta\_{\text{robust RL}} = \Big| P^0\widehat{V}\_{\text{rob}} -\widehat{P}^0\widehat{V}\_{\text{rob}} \Big|$, which is also linear w.r.t. $P^0 - \widehat{P}^0$ --- the same as standard RL. However, the range of the robust value function $\widehat{V}\_{\text{rob}}$ in robust RL decreases rapidly and becomes smaller than the range of $\widehat{V}$ in standard RL, since the values in all states are pushed toward the minimum one and become close to each other. As a result, the error term of robust RL $\delta_{\text{robust RL}}$ becomes smaller than that of standard RL $\delta_{\text{RL}}$ as $\sigma$ grows, i.e., robust RL becomes easier than standard RL. * **Using the $\chi^2$ uncertainty set: can be harder than standard RL.** In this case, the error term of robust RL $\delta_{\text{robust RL}}$ is nonlinear in, and sensitive to, the model estimation error $P^0 - \widehat{P}^0$, which can induce a large error term even if $P^0 - \widehat{P}^0$ is small, especially when $\sigma$ is large (e.g., $\sigma > O(\frac{1}{1-\gamma})$). Standard RL has an error term that is linear in $P^0 - \widehat{P}^0$, while the error term of robust RL may explode even when $P^0 - \widehat{P}^0$ is small; namely, robust RL becomes much harder than standard RL. > [1] Iyengar, G. N. (2005). Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280. \ [2] Ma, Xiaoteng, et al. "Distributionally robust offline reinforcement learning with linear function approximation." arXiv preprint arXiv:2209.06620 (2022). \ [3] Panaganti, Kishan, and Dileep Kalathil. "Sample complexity of robust reinforcement learning with a generative model." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. \ [4] Jin, Chi, et al. "Provably efficient reinforcement learning with linear function approximation." Conference on Learning Theory. PMLR, 2020.
--- Rebuttal 2: Title: Thanks for your insightful suggestions! Comment: Dear reviewer, Thank you once again for investing your valuable time in providing feedback on our paper. Your insightful suggestions have led to significant improvements in our work, and we look forward to possibly receiving more feedback from you. Since the discussion period between the author and reviewer is rapidly approaching its end, we kindly request you to review our responses to ensure that we have addressed all of your concerns. Also, we remain eager to engage in further discussion about any additional questions you may have. Best, Authors --- Rebuttal Comment 2.1: Title: Thank you for raising the score and supporting this work! Comment: Dear reviewer, Thank you so much for raising the score and increasing your support for this work! We truly value your feedback and hope that we have addressed your insightful concerns and questions adequately. Best, Authors
null
null
null
null
null
null
A Scalable Neural Network for DSIC Affine Maximizer Auction Design
Accept (spotlight)
Summary: AMAs are a parametric family of auctions that generalizes VCG and that is exactly DSIC (unlike mechanisms found with alternative regret-based approaches) and IR. This paper proposes a deep variant of AMAs. In particular, AMA parameters are learned as outputs of a permutation-equivariant attention-based network. Near-optimal mechanisms within the AMA family can be found by optimizing this network. It can be applied to both contextual and classic auctions. Theory is provided to show that the learned auctions are indeed DSIC and IR. Extensive experiments and ablations support the method’s usefulness. I will begin by saying that a deep variant of AMAs is a glaring gap in the literature, considering the popularity of deep regret-based approaches. Someone has to do it, and if done correctly, a paper like this absolutely should be published. Unfortunately, in my opinion, this particular version is not ready to be published yet. Strengths: - The paper is clear and well-written. It cites and discusses the relevant literature and places itself within the context of the literature. - The goal of the paper, namely, a deep variant of AMAs, is worth pursuing. The paper is relevant to the conference. - The experiments and ablations that are provided are extensive and look sound. I especially like the case study in Fig 3. I was surprised to see such diverse allocations, and the allocation acting as a reserve price is especially cool. - The theory about the auctions being DSIC and IR is an essential part of the paper. Weaknesses: - My biggest worry is that the experimental comparison is unfair. The authors claim in Table 1b that they “omit CITransNet since it is designed for contextual auctions” and proceed to compare their approach to RegretNet. RegretNet is based on fully-connected layers, whereas AMenuNet uses attention layers. Ivanov et al. (2022) show that attention layers improve revenue for the same regret levels in auctions with larger input sizes.
So CITransNet (and, ideally, RegretFormer) should be included for comparison in auctions without context, as they are expected to achieve higher revenue than RegretNet and AMenuNet. For example, in the 3x10 auction, the paper reports 5.59 revenue of AMenuNet vs 5.54 revenue of RegretNet. Judging from these results, it might seem that AMenuNet is strictly better than RegretNet (same revenue but always DSIC). However, RegretFormer in its paper reports more than 6.1 revenue in 3x10 for the regret of <= 0.005 (as indicated in Table 1, “The regret of both CITransNet and RegretNet is less than 0.005”). By the way, this only takes 2 attention-based blocks, as opposed to 3-5 used by AMenuNet. If this is included in the comparison with AMenuNet, the conclusion would be that zero regret actually comes at a cost of revenue. This would not diminish the authors' contributions or the usefulness of their method, but would simply be a more sound experimental design. - As Rahme et al. (2021) show, RegretNet is hyperparameter-sensitive. Choosing the wrong hyperparameters may decrease performance. The paper includes some settings that were not explored in the original RegretNet paper, like 3x1 and 5x5. However, in the appendix, it is only stated: “CITransNet and RegretNet. We follow the same hyperparameters in Duan et al. [2022] and Dütting et al. [2019].”. So for 3x1 and 5x5, it is not reported how the hyperparameters are selected, potentially resulting in an unfair comparison. I would appreciate it if the authors could clarify this for me. As a suggestion, both Rahme et al. (2021) and Ivanov et al. (2022) propose alternative loss functions that are less sensitive to hyperparameters. - A potential issue with AMenuNet is that it requires a hand-selected menu size. I wonder if larger settings would require larger menu sizes and if this introduces scaling issues.
An ablation would be welcome where AMenuNet is given different menu sizes in some small and large settings and we examine revenue as a function of the menu size. E.g., we could see that the approach performs well in settings both large and small given the same small menu sizes, in which case there are no scaling issues. - Compared to CITransNet or RegretFormer, there is nothing novel about the architecture, besides the softmax trick with annealed temperature to work with deterministic allocations. This is not a weakness per se, just a neutral fact. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - How are hyperparameters selected for RegretNet in novel settings (that did not appear in its original paper)? - Why do the authors think the learned menu is so diverse? If all allocations are trained on the same loss function, why is there no mode collapse (i.e., I could expect one “best” allocation to be repeated in all menu positions)? - 176-177: “In non-contextual auctions, similar 177 to word embedding [Mikolov et al., 2013], we embed the unique ID of each bidder or item…”. This looks like positional encoding. Doesn’t it make learned mechanisms not permutation equivariant? The network could discriminate items and participants. Shouldn’t the same dummy ID be provided for all participants and items for true permutation equivariance? - Why does AMenuNet outperform Lottery AMA, given that the parameters of AMAs are learned in both cases? We are used to attention outperforming everything else, but there is nothing to attend to in non-contextual auctions (as I understand, the input is always the same combination of IDs), so this is surprising to me. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: For reasons mentioned in the weaknesses section, I think the method is a bit oversold. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! Your comments raise very good questions, and we will address them and the other concerns that you have raised. **Q1: About the experiments on RegretNet-based methods.** We understand the suggestion to include CITransNet and RegretFormer in the comparison for auctions without context, as they are expected to achieve higher revenue than RegretNet. However, it's important to note that our paper focuses on comparing our methods against other DSIC approaches instead of the non-DSIC RegretNet-based algorithms. The comparison with RegretNet and CITransNet provides supplementary insights rather than central facets of our study. We invite you to read the detailed discussion in Q1 of our general response for a more comprehensive discussion. > If this is included in the comparison with AMenuNet, the conclusion would be that zero regret actually comes at a cost of revenue. While incorporating CITransNet and RegretFormer in the comparison would indeed support the conclusion that "zero regret comes at a cost of revenue," our experimental results already demonstrate this trend: 1. In the contextual auction experiments, CITransNet outperformed AMenuNet in 9 of 10 instances. 2. In the classical auction experiments, RegretNet outperformed AMenuNet in 4 of 6 instances. **Q2: About the hyperparameter selection of RegretNet in novel settings** For 5x5(C), we set the hyperparameters of RegretNet as reported in the original paper for the '5x10 uniform' setting. This choice was made due to the close relationship between these two cases. As indicated in Table 1(b), RegretNet with these hyperparameters performed well in 5x5(C), outperforming all other methods in terms of revenue. Similarly, for 3x1(D), we used the same configuration as for '1x2 uniform' in the original paper. Table 1(b) shows that RegretNet achieved revenue close to the optimal solution under these hyperparameters.
**Q3: About the ablation study with respect to menu size.** We appreciate the suggestion for conducting an ablation study to analyze the impact of different menu sizes on AMenuNet's performance. We conduct experiments on AMenuNet using various menu sizes in two uniform (setting C) scenarios: | Menu Size | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | | ------------ | ------ | ------ | ------ | ------ | ------ | ------ | --- | | 3x5 uniform | 2.6336 | 2.7125 | 2.7470 | 2.7848 | 2.8051 | 2.8099 | 2.8110 | | 3x10 uniform | 5.1853 | 5.2348 | 5.4042 | 5.4932 | 5.5941 | 5.6714 | 5.7349 | We can see that AMenuNet's revenue performance improves as the menu size increases in both the 3x5 uniform and 3x10 uniform settings. **Q4: About why the learned menu is diverse.** The learned menu exhibits diversity for several reasons: 1. The menu includes allocations that are not only candidates for the winning allocation $A^*$ but also candidates for the allocation $A^*_{-k}$ that maximizes the affine social welfare for each bidder $k \in [n]$. This requirement for non-zero revenue leads to at least two different allocations within the menu, so the menu never collapses to a single allocation. 2. Although all allocations are trained using the same loss function, the gradients for each allocation within a training batch differ. The use of the softmax function to approximate the winning allocation $\hat{A}^*$ during training means that only the best allocation (i.e., the one with the highest score) receives a significant gradient during backpropagation. 3. The diversity is further enhanced by sampling valuation profiles from a prior distribution during training. Different valuation profiles within the training batch can lead to various best allocations. **Q5: About the embeddings of the IDs** To ensure the mechanism can handle diverse and heterogeneous participants and items effectively, the use of unique ID embeddings is essential.
For instance, in Myerson auctions [Myerson, 1981], the auctioneer relies on prior knowledge of each bidder's valuation profile, making it essential to correctly identify each bidder for accurate prior distribution estimation. Using the same dummy ID for all participants and items would hinder the mechanism's ability to differentiate between different bidders and items, especially in cases of heterogeneity. Consequently, the induced auction mechanism would be limited to symmetric settings, as seen in Rahme et al. 2021. **Q6: About the comparison between AMenuNet and Lottery AMA.** We discussed the potential reasons for AMenuNet's superior performance compared to Lottery AMA in Q2 of our general response. In short, AMenuNet's neural network effectively captures the correlation of AMA parameters, and its over-parameterization provides advantages for optimization, leading to improved results. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: Thank you for the detailed answers to my questions! 1. I understand your position and let me state again that I do think the paper makes a valuable contribution. That said, comparing with relevant baselines is essential, and I guess we disagree on what are the relevant baselines in this case. I would not mind as much if it only was the case that these baselines are likely to outperform the method in question. But they are using similar attention-based architectures. And the paper does compare with RegretNet (as it should) which uses a "weaker" architecture. Regarding RegretNet outperforming AMenuNet 4 out of 6 times, the absolute difference is small, and there are 2 out of 6 times when the reverse is true. I am not even sure the result is statistically significant. At the very least, I advise including metrics reported in the respective papers of EquivariantNet, RegretNet, and CITransNet, for the settings that intersect with your paper. 
Also, I advise including an ablation where attention layers in AMenuNet are replaced with fully-connected layers. If this performed worse (although either outcome would be interesting), it would also show that attention > fully-connected. In conjunction, these two changes (and an explicit discussion), would be a sufficient alternative. 2. This is reasonable, thank you for clarifications! I suggest adding this to the appendix. 3. This is very interesting, thanks! I also wonder about a smaller setting like 2x2 but I leave this consideration with the authors. 4. and 6. I see, thank you for the clarification! 5. I agree with every word in your response. I understand why distinguishing agents is essential in asymmetric settings. But I still think this makes the architecture non-permutation-equivariant, and this has not been addressed in the rebuttal. I will elaborate. Consider a setting with two participants. The first has an id "1" and its valuation (for all items) is sampled from U[0, 1]. The second has an id "2" and its valuation is sampled from U[0, 2]. Yes, not providing an id would not learn an optimal auction, but it would be symmetric, and the architecture (let's say, an attention layer) -- permutation-equivariant. If we encode positions "1" and "2" before the attention layers, the optimal auction is in the space of solutions so it can be learned, but the architecture is no longer permutation-equivariant. Do we suddenly get the best of both worlds when we call "1" and "2" not positions but ids? No, we learn a mechanism that is not symmetric. We effectively encoded positions. It does not matter that technically the attention layers are permutation-equivariant. In the prior literature, permutation-equivariant architecture == symmetric mechanisms, and non-permutation-equivariant architecture == asymmetric mechanisms. So it is confusing to me that the method is claimed to be permutation-equivariant while learning asymmetric mechanisms. 
On top of that -- what's so great about permutation equivariance in this case? The paper cites Qin et al. [2022], but they "consider the popular additive valuation and symmetric valuation setting". The EquivariantNet also only experiments with symmetric settings. And the property to generalize (lines 340-341) is not due to equivariance -- NLP routinely applies positional encodings (order matters) that work for sentences of varying lengths. The paper treats attention-based architecture as equivariant regardless of the positional (or id) encoding. So answering the question of what's so great about equivariance is actually not easy -- is it the equivariance or the attention layers that are great? It could be answered with ablations with fully-connected layers (like RegretNet) and deepset layers (like EquivariantNet) since both don't use attention and only the latter is equivariant. But this might be too deep of a rabbit hole to prove something about the property that AMenuNet does not even possess (as argued above). To be clear, this is a nitpick, but the paper seems to make a big deal out of equivariance, and I do not think it should. To conclude, I still believe the paper oversells and the soundness should be improved. The rebuttal did not change my opinion. I leave my score unchanged and I leave it to ACs to decide how important these concerns are. But feel free to let me know if I missed/misinterpreted something. --- Reply to Comment 1.1.1: Title: Thank you for your comments! Comment: Thank you for your comments! We will clarify the following concerns you raised. **Q7: The additional experiments.** > But they are using similar attention-based architectures. And the paper does compare with RegretNet (as it should) which uses a "weaker" architecture. We agree that CITransNet and AMenuNet share similar attention-based architectures. This is the reason why we conducted experiments to compare CITransNet and AMenuNet in Table 1(a).
As for similar attention-based architectures in classical auctions, we additionally conduct supplementary experiments on CITransNet, where we treat the IDs as discrete contexts. We report CITransNet's revenue when the regret is slightly under 5e-4 in the following table, where the other results are taken from our paper: | Settings | DSIC? | 2x5(C) | 3x10(C) | 5x5(C) | 3x1(D) | 1x2(E) | 1x2(F) | | ---------- | ----- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | | Optimal | Yes | - | - | - | 2.7490 | **9.7810** | **0.1706** | | CITransNet | No | **2.3788** | **5.9191** | **3.4759** | **2.7541** | 9.7551 | 0.1691 | | RegretNet | No | 2.3390 | 5.5410 | 3.4260 | 2.7264 | 9.7340 | 0.1732 | | AMenuNet | Yes | 2.2768 | 5.5896 | 3.3916 | 2.7382 | 9.6219 | 0.1701 | We observe that the attention-based CITransNet consistently outperforms both RegretNet and AMenuNet across all scenarios with unknown optimal solutions. Additionally, CITransNet approximates the optimal solution well in the known cases. These outcomes underscore the revenue advantages of the attention module in CITransNet when contrasted with the fully connected neural network-based RegretNet. Furthermore, the results reinforce the conclusion that AMenuNet's zero regret comes at the expense of revenue. However, it's important to emphasize that the advantages of AMenuNet compared to CITransNet and RegretNet are not primarily focused on revenue. Instead, AMenuNet's distinctive strength lies in its inherent capacity to ensure Dominant Strategy Incentive Compatibility (DSIC) by design. > Regarding RegretNet outperforming AMenuNet 4 out of 6 times, the absolute difference is small, and there are 2 out of 6 times when the reverse is true. I am not even sure the result is statistically significant. As we described at the beginning of the experiment section, all of our presented results are averaged over experiments with five different seeds.
> At the very least, I advise including metrics reported in the respective papers of EquivariantNet, RegretNet, and CITransNet, for the settings that intersect with your paper. In our paper, we have already included the revenue numbers reported in the RegretNet and CITransNet papers for the intersecting settings. > Also, I advise including an ablation where attention layers in AMenuNet are replaced with fully-connected layers. We agree that conducting an ablation study to investigate the revenue benefits of attention layers compared to fully-connected layers is indeed interesting. However, we believe this experiment is better suited for inclusion in the Appendix rather than the main body. This is because our primary motivation behind employing attention-based layers extends beyond merely enhancing revenue performance. Our goal is to establish permutation-equivariance and the ability to generalize to auctions with varying scales, ultimately enhancing scalability. These specific attributes are not satisfied by fully-connected layers. For the ablation experiment, we compare AMenuNet with AMenuNet-FCN, wherein we replace the transformer-based interaction modules with a multi-layer fully connected neural network. We set the number of layers in AMenuNet-FCN to $4$, each with $128$ hidden nodes. | Settings | DSIC? | 2x5(C) | 3x10(C) | 5x5(C) | | ---------- | ----- | ---------- | ---------- | ---------- | | AMenuNet | Yes | **2.2768** | **5.5896** | **3.3916** | | AMenuNet-FCN | Yes | 2.1333 | 5.0161 | 3.3657 | We can observe that AMenuNet consistently outperforms AMenuNet-FCN across the listed scenarios. Additionally, the parameter count of AMenuNet is lower than AMenuNet-FCN's, further underscoring the revenue advantages of a transformer-based architecture. We will consider adding the ablation experiments in the Appendix. --- Reply to Comment 1.1.2: Title: About permutation-equivariance.
Comment: **Q8: About permutation-equivariance.** There is a disagreement over the exact meaning of permutation-equivariance. The key is **whether the permutation operator should rearrange bidder and item IDs or contexts**. We must highlight that our paper does not treat the IDs as fixed positional encoding. Instead, **as described in Definition 4.2, we will also permute the IDs when we permute the bids.** **--Q8.1: The definition of permutation-equivariance.** > The paper treats attention-based architecture as equivariant regardless of the positional (or id) encoding. Your understanding of permutation-equivariance assumes that the permutation operator rearranges bids alone. However, this interpretation mainly fits symmetric auction situations. Here, specific bidder or item IDs aren't crucial, so there's no need to account for their permutation. On the other hand, our concept of permutation-equivariance, formally laid out in Definition 4.2, has a broader scope. It encompasses scenarios where public IDs or contexts of bidders and items have significant impacts. Therefore, the permutation process should also encompass shuffling these IDs. This definition aligns with Duan et al. [2022] and Qin et al. [2022], which we'll delve into later. **--Q8.2: About the example of 2 bidders and 1 item, with $v_1 \sim U[0, 1]$ and $v_2 \sim U[0, 2]$.** To clarify, in scenarios considering IDs, AMenuNet takes the IDs "1" and "2" as part of its inputs. If we swap the bidders, the order of their IDs *will also switch*, because IDs hold significance as public information. Therefore, the outcome of allocation and payment will maintain the same permutation as the input bids and IDs. **--Q8.3: Permutation-equivariance in previous literature.** > In the prior literature, permutation-equivariant architecture == symmetric mechanisms, and non-permutation-equivariant architecture == asymmetric mechanisms. We disagree with the given statement.
Symmetric mechanisms can be considered a specific case of permutation-equivariance. Permutation equivariance can also apply to asymmetric mechanisms as long as the permutation operates on bidder IDs (or contexts). In the existing literature, researchers have introduced the concept of permutation equivariance to cover asymmetric auctions by permuting IDs (or contexts). For instance, Duan et al. [2022] define permutation-equivariance in Remark 3.1 as follows: > [Remark 3.1 of Duan et al. [2022]] We say an auction mechanism $(g^w, p^w)$ is permutation-equivariant if for any two permutation matrices $\Pi_{n}\in \{0,1\}^{n\times n}$ and $\Pi_{m}\in \{0,1\}^{m\times m}$, and any input (including bids $b \in \mathbb{R}^{n\times m}$, bidder-contexts $x \in \mathbb{R}^{n\times d_x}$ and item-contexts $y \in \mathbb{R}^{m\times d_y}$), we have $g^w(\Pi_{n}b\Pi_{m}, \Pi_n x, \Pi_m^T y)=\Pi_{n}g^w(b,x,y)\Pi_{m}$ and $p^w(\Pi_{n}b\Pi_{m}, \Pi_n x, \Pi_m^T y)=\Pi_{n}p^w(b,x,y)$ Furthermore, Qin et al. [2022] also incorporate the concept of permutation applied to the bidder and item IDs or contexts, despite their focus on additive and symmetric valuations. This is evident in their definitions of bidder orbit averaging $\mathcal{Q}_1$ and item averaging $\mathcal{Q}_2$: > [Qin et al. [2022]] The bidder averaging $\mathcal{Q}_1$ and the item averaging $\mathcal{Q}_2$ acting on the allocation rule $g$ and the payment rule $p$, respectively, are as below, > $$\mathcal{Q}_1{g}(v,x,y)=\frac{1}{n!}\sum _{\sigma _n\in S_n}\sigma _n^{-1}g(\sigma_n v,\sigma_n x,y), \mathcal{Q} _1{p}(v,x,y)=\frac{1}{n!}\sum _{\sigma _n\in S_n}\sigma _n^{-1}p(\sigma _n v,\sigma _n x,y),$$ > $$\mathcal{Q}_2{g}(v,x,y)=\frac{1}{m!}\sum _{\sigma_m\in S_m}g(v \sigma_m,x,y\sigma_m)\sigma_m^{-1}, \mathcal{Q}_2{p}(v,x,y)=\frac{1}{m!}\sum _{\sigma_m\in S_m}p(v\sigma_m,x,y\sigma_m).$$ Such definition clearly shows the application of permutation to bidder contexts $x$ and item contexts $y$. 
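These definitions can be checked numerically. Below is a minimal, self-contained sketch (our own toy allocation rule, not AMenuNet or the cited papers' code) that permutes the bids together with the bidder and item contexts and verifies that the allocation permutes the same way:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def allocation(bids, x, y):
    # Toy permutation-equivariant allocation rule: score each (bidder, item)
    # pair by bid plus context similarity, give each item to its top bidder.
    n, m = len(bids), len(bids[0])
    score = [[bids[i][j] + dot(x[i], y[j]) for j in range(m)] for i in range(n)]
    return [[1.0 if score[i][j] == max(score[k][j] for k in range(n)) else 0.0
             for j in range(m)] for i in range(n)]

rng = random.Random(0)
n, m, d = 3, 2, 4
bids = [[rng.random() for _ in range(m)] for _ in range(n)]
x = [[rng.random() for _ in range(d)] for _ in range(n)]  # bidder contexts
y = [[rng.random() for _ in range(d)] for _ in range(m)]  # item contexts

sigma_n = rng.sample(range(n), n)  # permutation of bidders
sigma_m = rng.sample(range(m), m)  # permutation of items

g = allocation(bids, x, y)
g_perm = allocation(
    [[bids[sigma_n[i]][sigma_m[j]] for j in range(m)] for i in range(n)],
    [x[sigma_n[i]] for i in range(n)],   # contexts are permuted too
    [y[sigma_m[j]] for j in range(m)],
)
# Equivariance: the output permutes exactly as the (bids, contexts) did
assert all(g_perm[i][j] == g[sigma_n[i]][sigma_m[j]]
           for i in range(n) for j in range(m))
```

The crucial detail, mirroring Definition 4.2, is that the contexts are permuted along with the bids, which is what makes equivariance meaningful even for asymmetric (context-dependent) rules.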
In summary, it is important to recognize that permutation-equivariance goes beyond symmetric auctions. It can be extended to include asymmetric auctions by employing permutations on bidder IDs or contexts. **References** [1] Zhijian Duan, Jingwu Tang, Yutong Yin, Zhe Feng, Xiang Yan, Manzil Zaheer, and Xiaotie Deng. A context-integrated transformer-based neural network for auction design. ICML 2022. [2] Tian Qin, Fengxiang He, Dingfeng Shi, Wenbing Huang, and Dacheng Tao. Benefits of permutation equivariance in auction mechanisms. NeurIPS 2022.
Summary: This paper introduces AMenuNet, a scalable neural network for AMA design, which ensures DSIC and IR. The experiments demonstrate the effectiveness of AMenuNet, including its revenue, scalability, and out-of-setting generalizability. Strengths: Originality: 1. Proposes a new automated auction design method: The paper proposes a new automated auction design method that uses a neural network to construct AMA parameters, thereby improving scalability and revenue performance. 2. Combines deep learning and game theory: The paper combines deep learning and game theory to propose a new method for solving the DSIC problem in multi-item auctions, which is a novel approach. Quality: 1. Experimental results demonstrate the effectiveness of the proposed method: The paper demonstrates the effectiveness of the proposed method in extensive experiments, including performance in different environments and scalability in large auctions. 2. Theoretical proof ensures the DSIC property of the auction: The paper ensures the DSIC property of the auction through theoretical proof, thereby improving the quality of the auction. Clarity: 1. Clear paper structure: The paper has a clear structure, strong logic, and is easy to understand. 2. Detailed experimental section: The experimental section of the paper is detailed, including experimental settings, results, and analysis, making it easy for readers to understand and reproduce. Importance: 1. Contribution to the field of automated auction design: The proposed method in the paper makes an important contribution to the field of automated auction design, helping people design high-revenue auction mechanisms. 2. Inspirational significance for the combination of deep learning and game theory: The paper explores the combination of deep learning and game theory, providing inspiration for related research in the field.
Weaknesses: 1. Lack of comparison with existing methods: The paper does not compare the proposed method with some existing industrial methods, such as DNA and NMA, which makes it difficult to evaluate the novelty and effectiveness of the proposed method in industry. 2. Limited applicability: The proposed method is only verified on small-scale datasets, lacking demonstrations on large-scale industrial scenarios. 3. Expression problems: some typos need to be fixed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Lack of modeling of full contextual externalities. 2. Lack of comparison with WVCG, NMA. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments! We appreciate your positive feedback and we will address the questions you listed. **Q1: About the comparison with industrial methods.** > The paper does not compare the proposed method with some existing industrial methods, such as DNA, NMA, which makes it difficult to evaluate the novelty and effectiveness of the proposed method in industry. > Lack of comparison with WVCG. In case of misunderstanding, we refer DNA as the method proposed in the paper "Neural auction: End-to-end learning of auction mechanisms for e-commerce advertising", NMA as the one proposed in the paper "NMA: Neural Multi-slot Auctions with Externalities for Online Advertising" and WVCG as the one presented in the paper "Truthful learning mechanisms for multi-slot sponsored search auctions with externalities." It's worth noting that all of these methods focus on a different type of auction problem, namely, advertising auction. Advertising auctions are single-parameter auctions, which involve bidders placing a single parameter as their valuation. The DSIC characterization of such auctions is based on Myerson Lemma [Myerson, 1981]. In contrast, our AMenuNet is specifically designed for more complex multi-parameter auctions, where bidders bid multiple parameters as their valuations. Such auctions introduce greater complexity due to the lack of DSIC characterizations, making them distinct from single-parameter auction settings. Therefore, while DNA, NMA and WVCG are relevant for advertising auctions, the nature of the auction setting addressed by AMenuNet is inherently different. **Q2: About the scalability.** > The proposed method is only verified on small-scale data sets, lacking demonstrations on large-scale industrial scenarios. As highlighted earlier in Q1, our primary focus is on multi-parameter auctions, a more intricate setting than industrial advertising auctions. 
The complexity inherent to multi-parameter auctions poses challenges for scaling to large industrial scenarios. Notably, the auction scales we've considered in this paper already exceed those of prior AMA-based methods, such as Sandholm and Likhodedov [2015] and Curry et al. [2022], and are comparable to previous RegretNet-based approaches. **References** [1] Tuomas Sandholm and Anton Likhodedov. Automated design of revenue-maximizing combinatorial auctions. Operations Research, 63(5):1000–1025, 2015 [2] Michael Curry, Tuomas Sandholm, and John Dickerson. Differentiable economics for randomized affine maximizer auctions. IJCAI 2023.
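To make the single-parameter vs. multi-parameter contrast concrete: the Myerson Lemma yields a closed-form characterization only in the single-parameter case. For example, with $v \sim U[0,1]$ the virtual value is $\phi(v) = v - (1 - F(v))/f(v) = 2v - 1$, so the revenue-optimal reserve price is the root $r = 0.5$. A tiny illustration of our own (not from the rebuttal):

```python
def virtual_value_uniform(v):
    # phi(v) = v - (1 - F(v)) / f(v), with F(v) = v and f(v) = 1 on [0, 1]
    return 2.0 * v - 1.0

# The revenue-optimal reserve is where the virtual value crosses zero.
reserve = 0.5
assert virtual_value_uniform(reserve) == 0.0
```

No analogous closed form is known for the multi-parameter auctions AMenuNet targets, which is why it searches within a class of mechanisms that is DSIC by construction instead.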
Summary: This paper proposes a new architecture for learning auctions that are DSIC (but not necessarily revenue optimal). The authors' main contribution lies in a transformer-based permutation-equivariant architecture designed to calculate the allocations, weights, and boosts variables utilized by AMA-based approaches. Strengths: The primary strength of this novel architecture resides in its capacity to effectively handle the following aspects: 1. **Contextual and non-contextual auctions:** The architecture demonstrates the ability to accommodate both contextual and non-contextual auction settings, allowing for a broader range of applications. 2. **Equivariance for varying auction sizes:** The architecture exhibits permutation equivariance, enabling it to handle auctions of different sizes without requiring significant modifications. 3. **Better scalability:** In comparison to existing approaches, the proposed architecture exhibits improved scalability, providing a more efficient and scalable solution for auction learning tasks. Weaknesses: This paper shares similarities with the work of Curry et al. [2022] on Differential Economics for Randomized AMA auctions. However, a key distinction is that the allocation menus in this paper are parameterized by a neural network, in contrast to directly being parameterized through autograd variables. Additionally, the proposal of contextual auction design through the transformer architecture has been previously suggested by Duan et al. [2022]. Consequently, the contributions of this paper may not be considered highly novel. Nevertheless, the authors demonstrate improved revenue performance over existing approaches, as evidenced in Table 2. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In the context of classical auctions, is there any intuition for why optimizing the neural network that outputs an allocation works better than directly optimizing the allocation variables?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review! We will address the questions you have listed. **Q1: About the contribution** > This paper shares similarities with the work of Curry et al. [2022] on Differential Economics for Randomized AMA auctions. However, a key distinction is that the allocation menus in this paper are parameterized by a neural network, in contrast to directly being parameterized through autograd variables. Additionally, the proposal of contextual auction design through the transformer architecture has been previously suggested by Duan et al. [2022]. Consequently, the contributions of this paper may not be considered highly novel. In contrast to the work of Curry et al. [2022] (Lottery AMA), our contribution lies in introducing a novel deep neural network-based AMA method. The underlying neural network architecture empowers our method to handle classical and contextual settings, offering flexibility and adaptability. Furthermore, our method enjoys several advantages beyond revenue maximization. Its inherent permutation equivariance ensures consistent outcomes regardless of the order of bidders or items, enhancing its practicality. Additionally, the scalability of our approach is noteworthy, as the neural network can effectively process input data of various sizes, making it applicable to a wide range of auction scenarios. Compared to the work of Duan et al. [2022], our AMA-based method can ensure Dominant-Strategy Incentive Compatibility (DSIC) by construction. This property enhances the reliability and trustworthiness of the proposed mechanism, providing bidders with strong incentives to reveal their true valuations. **Q2: About the benefits of AMenuNet with respect to Lottery AMA.** > In the context of classical auctions, is there any intuition for why optimizing the neural network that outputs an allocation works better than directly optimizing the allocation variables?
We have extensively discussed the reasons for AMenuNet's superior performance over Lottery AMA in our general response (Q2). In summary, the key factors contributing to AMenuNet's advantage are: 1. AMenuNet adds more inductive bias by capturing the correlation of AMA parameters through the neural network. 2. AMenuNet's over-parameterization from the neural network offers a better optimization landscape. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: 1. Can you please elaborate on what's novel in the neural network-based AMA method - is this a simple change in the output layer to output menus or something different? 2. In the case of non-contextual auctions, I understand that the inputs are fixed for a given setting. In this case, are you claiming/observing that optimizing a parameterized function to generate a set of outputs is easier than directly optimizing the outputs itself? 3. Regarding permutation equivariance, how do you achieve permutation symmetry concerning the bids once you have the allocations and boosts? Can you use similar techniques with Lottery AMA parameters as well? --- Reply to Comment 1.1.1: Title: Thank you for your further comments! Comment: Thank you for your further comments! We will address the questions you have listed. **C1** > 1. Can you please elaborate on what's novel in the neural network-based AMA method - is this a simple change in the output layer to output menus or something different? First, we need to highlight that, to the best of our knowledge, we are the first to incorporate a neural network into AMAs. In contrast, previous literature directly provides and optimizes these AMA parameters without the usage of neural networks. Consequently, their methods are limited to classical auctions. In our paper, we are not "simply changing the output layer to output menus". Instead, we are the first to use the neural network to output all the AMA parameters (the allocation menu, weights, and boosts).
As a result, our method can handle both classical and contextual auctions. Another novelty is the construction of our attention-based neural network, AMenuNet. The weights of AMenuNet are not affected by the number of bidders and items. Therefore, AMenuNet can generalize to auction settings with a different number of bidders and items, enhancing its scalability. **C2** > 2. In the case of non-contextual auctions, I understand that the inputs are fixed for a given setting. In this case, are you claiming/ observing that optimizing a parameterized function to generate a set of outputs is easier than directly optimizing the outputs itself? In non-contextual auctions, we observe in experimental results that optimizing the neural network is easier than optimizing the AMA parameters. We have discussed the possible reasons in Q2 of our general response. In short, AMenuNet provides more inductive bias by capturing the correlation between different AMA parameters through the neural network. Moreover, AMenuNet's over-parameterization potentially offers a better optimization landscape. **C3** > 3. Regarding permutation equivariance, how do you achieve permutation symmetry concerning the bids once you have the allocations and boosts? Can you use similar techniques with Lottery AMA parameters as well? As shown in Definition 4.2, if we permute the bidders and items, **both the bids and IDs (or contexts) of all bidders and items will be permuted**. Since **the IDs (or contexts) are the inputs of AMenuNet**, once we permute them, the output of AMenuNet will also be permuted in the same way due to its permutation-equivariance architecture. Our definition of permutation-equivariance is also used in Duan et al. 2022 and Qin et al. 2022, and we have discussed it in detail in Q8 of our response to Reviewer 1hHm. For the Lottery AMA, it directly provides the allocations and boosts regardless of the permutation of IDs. Therefore, the same technique cannot be applied to the Lottery AMA. 
**References** [1] Zhijian Duan, Jingwu Tang, Yutong Yin, Zhe Feng, Xiang Yan, Manzil Zaheer, and Xiaotie Deng. A context-integrated transformer-based neural network for auction design. ICML 2022. [2] Tian Qin, Fengxiang He, Dingfeng Shi, Wenbing Huang, and Dacheng Tao. Benefits of permutation equivariance in auction mechanisms. NeurIPS 2022.
Summary: Revenue maximizing strategyproof auction design with multidimensional types has proven to be extremely challenging. The lack of theoretical progress even in simple problem instances has motivated the use of machine-learning-based techniques to approximately learn high-performing auctions. One approach, typified by “RegretNet” and its followup works, involves the use of neural networks as function approximators to directly represent auction mechanisms — the training process optimizes revenue and strategyproofness. However, although the mechanisms learned by these neural network based approaches are qualitatively good and likely near to the true optimal mechanism, the enforcement of the strategyproofness constraints is not perfect — a significant limitation. Another line of work searches within some restricted class of mechanisms, all of which are guaranteed to be strategyproof, for one that performs well. The current work builds on this latter approach. Like some previous work, they focus on the class of affine maximizer auctions, which are guaranteed to be strategyproof. However, previous work directly optimized the parameters and possible outcomes of the auction. They instead treat the mechanism itself as the outcome of a neural network. Crucially, this neural network does not see the bids as input — at most, it sees some informative “context” features — so strategyproofness is preserved. Also, even for auctions without context, training using this architecture seems to perform better than optimizing the parameters directly. The neural network uses a transformer architecture, which comes with some advantages — it allows for permutation-equivariance, a useful property which is satisfied by optimal auctions when the bidders are anonymous. 
In experiments, the authors find that their method performs very well -- getting higher revenue than previous strategyproof approaches and even performing comparably to the unconstrained neural network approaches in some cases. Strengths: The experiments are done quite well and follow standard methodology, the baselines are well-chosen, the method is explained clearly, the results show a clear improvement on existing work, and the technique opens up capabilities in new settings. Weaknesses: There have since been many improvements on RegretNet in addition to CITransNet, with and without contexts. It could be interesting to compare to some of these as well. The section describing the auction architecture is extremely dense. It's possible this had to be compressed a lot for the NeurIPS page limit, but I found it hard to follow. I ended up looking at the diagram+code and found this much easier to understand than the written part. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Where do you get the reported revenues for the other methods? Are these experiments you ran yourself or taken from another paper? If reproduced yourself, how do they compare to reported results in other papers? Your table 1 has some methods in bold even when RegretNet/CITransNet outperform them. I admit it does say in the caption that bold means “best among DSIC methods”, and the DSIC methods are separated by bars, but when reading the paper, the first thing I did was immediately look at that table, so I still found it confusing. I think it would be good to find a way to make this even more obvious. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors adequately discuss the limitations of their approach (the main one being that it deals with a restricted class of auctions, unlike the RegretNet approaches). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback! We value your affirmation and we will address the questions and concerns you have listed. **Q1: About the comparison of RegretNet-based approaches.** > There have since been many improvements on RegretNet in addition to CITransNet, with and without contexts. It could be interesting to compare to some of these as well. While introducing more RegretNet-based methods into our experiments could be interesting, our primary focus is comparing our method with other DSIC (Dominant-Strategy Incentive-Compatible) approaches. We use the comparison with RegretNet and CITransNet to give extra information, not to make them the main focus. We suggest reading the detailed discussion in Q1 of our general response. **Q2: About the results of baselines.** > Where do you get the reported revenues for the other methods? Are these experiments you ran yourself or taken from another paper? If reproduced yourself, how do they compare to reported results in other papers? For the reported revenues of other methods, we used their previously reported results for settings that appeared in other papers. For settings not found in prior work, we conducted our own experiments. You can find detailed descriptions of hyperparameter settings in Appendix B. **Q3: About the presentation of Table 1.** > Your table 1 has some methods in bold even when RegretNet/CITransNet outperform them. I admit it does say in the caption that bold means “best among DSIC methods”, and the DSIC methods are separated by bars, but when reading the paper, the first thing I did was immediately look at that table, so I still found it confusing. I think it would be good to find a way to make this even more obvious. Thank you for your feedback! We will make the presentation of Table 1 more clear in the revision. --- Rebuttal Comment 1.1: Title: response Comment: Thanks for answering all my questions here.
Rebuttal 1: Rebuttal: We thank all reviewers for their careful comments and constructive suggestions. Here are our responses to some common questions in the reviews. **Q1: About the comparative experiments on AMenuNet and RegretNet-based methods.** Our primary focus in the experiments is to compare AMenuNet with other DSIC (Dominant-Strategy Incentive-Compatible) approaches, such as Lottery AMA and Item-Myerson. While RegretNet-based methods like CITransNet and RegretFormer cover a broader range of mechanisms, they do not guarantee DSIC. These methods might achieve higher revenue, but at the cost of sacrificing the DSIC property. In contrast, our approach strictly ensures that the mechanisms are DSIC, resulting in a more limited mechanism class than regret-based methods. Despite this narrower range, the primary advantage of AMenuNet over RegretNet-based approaches lies in its ability to ensure DSIC, prioritizing truthful bidding over revenue maximization. Nevertheless, we still conducted the comparison experiments involving the non-DSIC RegretNet. These experiments serve as supplementary insights rather than central facets of our study. We do so to highlight AMenuNet's capability to achieve substantial revenue while maintaining DSIC. **Q2: About why AMenuNet performs better than Lottery AMA.** AMenuNet holds several advantages over Lottery AMA: 1. Handles contextual settings, adapting to diverse auction environments where bidder valuations may depend on contextual information. 2. Permutation equivariant, ensuring consistent outcomes regardless of the order of bidders or items. 3. Demonstrates generalization capabilities, performing well in auctions with varying scales and sizes. Although AMenuNet and Lottery AMA possess the same capabilities in classical auctions given identical menu sizes, our empirical experiments demonstrate that AMenuNet excels over Lottery AMA. Several factors potentially contribute to this superiority: 1.
**Benefits of inductive bias**: While Lottery AMA calculates the allocation menu, bidder weights, and boosts as separate entities, AMenuNet takes a different approach. AMenuNet leverages an underlying neural network to compute these parameters, thereby capturing the intricate interdependencies among them. This characteristic of AMenuNet is an additional inductive bias. Adding more inductive biases usually improves the generalization ability of machine learning models [1,2]. 2. **Benefits of over-parameterization**: The parameters of Lottery AMA are just the allocation menu, bidder weights, and boosts. In contrast, AMenuNet is over-parameterized with the underlying transformer-based neural network. Extensive research in deep learning has consistently demonstrated that over-parameterization offers several benefits, including improved optimization landscapes [3], better generalization [4], and less sensitivity to parameter initialization [5]. As a result, the over-parameterization of AMenuNet makes it easier to optimize than Lottery AMA, leading to better revenue performance in practice. **References** [1] Shalev-Shwartz, Shai, and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014. [2] Mitchell, Tom M. "The need for biases in learning generalizations." (1980). [3] Buhai, Rares-Darius, et al. "Empirical study of the benefits of overparameterization in learning latent variable models." International Conference on Machine Learning. PMLR, 2020. [4] Allen-Zhu, Zeyuan, Yuanzhi Li, and Yingyu Liang. "Learning and generalization in overparameterized neural networks, going beyond two layers." Advances in Neural Information Processing Systems 32 (2019). [5] Frankle, Jonathan, and Michael Carbin. "The lottery ticket hypothesis: Finding sparse, trainable neural networks." ICLR 2019.
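To make the AMA parameters discussed above concrete, here is a minimal sketch (our own illustrative code, not the paper's implementation) of how an affine maximizer auction turns an allocation menu, bidder weights $w$, and boosts $\lambda$ into a winning allocation and payments:

```python
def ama_outcome(bid_values, w, lam):
    """Affine maximizer auction (AMA) outcome sketch.

    bid_values[k][i]: bidder i's reported value for candidate allocation k,
    w[i]: bidder weight, lam[k]: boost for candidate allocation k.
    """
    n, K = len(w), len(lam)
    # Pick the candidate maximizing the boosted, weighted welfare
    welfare = [sum(w[i] * bid_values[k][i] for i in range(n)) + lam[k]
               for k in range(K)]
    k_star = max(range(K), key=welfare.__getitem__)

    payments = []
    for i in range(n):
        # Affine welfare of everyone except bidder i, for each candidate
        wo = [sum(w[j] * bid_values[k][j] for j in range(n) if j != i) + lam[k]
              for k in range(K)]
        payments.append((max(wo) - wo[k_star]) / w[i])
    return k_star, payments

# Sanity check: one item, menu = {give to bidder 1, give to bidder 2, keep it},
# uniform weights, zero boosts -> this is exactly the second-price auction.
menu_values = [[3.0, 0.0], [0.0, 5.0], [0.0, 0.0]]
k_star, payments = ama_outcome(menu_values, [1.0, 1.0], [0.0, 0.0, 0.0])
assert k_star == 1 and payments == [0.0, 3.0]
```

Lottery AMA optimizes the triple (menu, weights, boosts) directly as free variables, whereas AMenuNet produces the same triple as the output of a neural network; the sketch above only shows how any such triple is turned into an outcome.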
NeurIPS_2023_submissions_huggingface
2023
PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation
Accept (poster)
Summary: This paper proposes a novel approach for creating an infinite number of diverse panoramic environments conditioned on text for vision-and-language navigation (VLN). Specifically, the authors use Stable Diffusion with captions of images from an existing dataset to generate indoor panoramic views. Recursive inpainting is used to ensure consistency across views. The authors also present pretraining and finetuning strategies that are designed to maximize the benefits of the synthesized data, and they demonstrate its effectiveness across various benchmarks. Strengths: 1. This paper demonstrates that strong generative models (like Stable Diffusion) have the potential to augment training data for robotics learning (VLN in this paper). Similar ideas could be adopted in other areas like autonomous driving. 2. The experiments over different benchmarks show that incorporating virtual panoramic scenes generated by Stable Diffusion is useful. Weaknesses: 1. There are no examples of mPLUG captioning. It would be great to include several samples of {previous_img, previous_caption, generated_img, generated_caption}. With such samples, readers could better understand how mPLUG creates suitable captions for generated images with different layouts and appearances. 2. This method creates novel data with different appearances, scene layouts, and instructions compared to the original data. And experiments show that incorporating these three factors together is helpful. However, it is not very clear which specific part, or all parts, have played a helpful role. If novel layouts and instructions do not help, I believe incorporating ControlNet/GLIGEN can provide novel appearances with the same arrangements, with no need for finetuning mPLUG and captioning, making the overall method simpler and easier to follow. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In this paper, panoramic generation is conditioned on captions from existing data.
However, it is also possible to create panoramic views in an unconditioned manner. Then mPLUG can also create instructions over this data. Will such data help VLN training? 2. Is it necessary to have novel layouts with instructions, as I mentioned in Weakness (2)? Can we just use ControlNet/GLIGEN? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed about the limitations and potential impact in the supp. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer PpHD** > Q1: Qualitative examples of generated data. We include one panorama-trajectory-instruction pair in the general response pdf. As shown in the Figure, our PanoGen environment is more diverse in appearance, but still maintains good alignment in semantics with the original environments. Besides, we find that the instructions generated with our mPLUG-based speaker are better in detail compared with the baseline EnvDrop speaker, which clearly indicates that the agent should stop in front of the sink inside the bathroom. > Q2: Ablations on instructions and layouts. * To demonstrate the effectiveness of generating new instructions for PanoGen environments, we experiment with pre-training with PanoGen environments and the original instructions. As shown in the Table below, without the instructions generated for the PanoGen environments, we observe a 0.8% drop in SR and a 2.2% drop in SPL. * To demonstrate the effectiveness of layouts, we compare our approach with EnvEdit [1], which only edits the appearance of the original environments by synthesizing based on semantic segmentation (a similar idea to using GLIGEN). As shown in the Table below, training with PanoGen environments further improves over training with EnvEdit environments by 1.4% in SR and 1.1% in SPL. | Model | SR | SPL | |----------------|------|------| | PanoGen | 74.2 | 64.3 | | - instructions | 73.4 | 62.1 | | - layouts | 72.8 | 63.2 | > Q3: Unconditional panorama generation. Due to time limitations, we cannot train a model to generate a large number of new panoramas unconditionally. Instead, we experiment with randomly switching existing panoramas in PanoGen environments to mimic the unconditional panorama generation process. Then, we use our mPLUG-based speaker to generate instructions for these environments. As shown in the Table below, training with this data decreases the performance by 1.2% in SR and 1.4% in SPL.
We attribute the performance decrease to the large inconsistency between consecutive viewpoints when using unconditional generation. Conditioning the generation on text captions can implicitly maintain some consistency between consecutive viewpoints. | Model | SR | SPL | |------------------|------|------| | PanoGen | 74.2 | 64.3 | | Random switching | 73.0 | 62.9 | [1] EnvEdit: Environment Editing for Vision-and-Language Navigation. In CVPR 2022. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal by Authors Comment: I appreciate the clarification provided. I still have a concern. It appears that when training the model using both original instructions and mPLUG-synthesized instructions, the resulting difference in success rate is only 0.8%. While I am not an expert in VLN, based on my previous experience in robotics, I perceive this difference to be relatively insignificant. In order to determine the true contribution of including mPLUG, can you conduct additional experiments using different seeds and report the mean and standard deviation of the success rate? This will provide a more comprehensive understanding of how the inclusion of mPLUG impacts the overall performance of the model. --- Reply to Comment 1.1.1: Comment: We're glad you appreciate our clarifications. As you suggested, we perform 4 more tests with seeds (1, 42, 1234, 12345). The success rate for each run is shown in the Table below. The mean success rate and std of our PanoGen are 73.59/0.530, while the mean and std of the model trained without synthetic instructions are 72.24/0.804. We perform a t-test over the two distributions, which suggests that the two distributions are significantly different with a p-value of 0.008 (i.e., p < 0.01).
| Seed | PanoGen | -instructions | |-------|---------|---------------| | 0 | 74.2 | 73.4 | | 1 | 73.05 | 72.54 | | 42 | 73.27 | 71.22 | | 1234 | 73.31 | 72.03 | | 12345 | 74.12 | 71.99 | | Mean | 73.59 | 72.24 | | STD | 0.530 | 0.804 | We hope these experiments address your remaining concern, and we hope you will consider updating your score if you are satisfied with the response. Thanks again for your time!
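For completeness, the significance test above can be reproduced from the per-seed numbers in the table. Below is a minimal sketch using only the Python standard library, assuming a pooled-variance (Student's) two-sample t-test; the rebuttal does not state which t-test variant was actually used, and the p-value step (a lookup against the t distribution) is omitted here:

```python
# Per-seed success rates copied from the table above.
from statistics import mean, stdev
from math import sqrt

panogen = [74.2, 73.05, 73.27, 73.31, 74.12]    # PanoGen SR per seed
no_instr = [73.4, 72.54, 71.22, 72.03, 71.99]   # "-instructions" SR per seed

m1, m2 = mean(panogen), mean(no_instr)
s1, s2 = stdev(panogen), stdev(no_instr)        # sample std (ddof = 1)
n1, n2 = len(panogen), len(no_instr)

# Pooled-variance t statistic with n1 + n2 - 2 degrees of freedom.
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

print(round(m1, 2), round(m2, 2))   # 73.59 72.24, matching the table
```

The resulting t statistic (around 3.1 with 8 degrees of freedom) is comfortably past the usual significance thresholds, consistent with the reported p < 0.01.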
Summary: This paper proposes a data augmentation method named PANOGEN, which generates panoramic environments. The proposed method employs a recursive image inpainting technique to generate coherent panoramic environments and incorporates these augmented environments in both the pre-training and fine-tuning stages. Experimental evaluations demonstrate the effectiveness of the proposed method in enhancing performance in VLN tasks. Strengths: 1. PANOGEN generates new environments without human annotation, which is novel and solves the problem of environment scarcity. 2. The paper is well-written and easy to follow. 3. The experiments are sufficient and the ablation studies are rigorous. Weaknesses: 1. The paper lacks a discussion of the selection of the image captioning model, since the captions are a critical component. I also wonder how it would perform to directly generate the panorama view from the instruction. 2. More discussion on the effectiveness of panorama replacement in fine-tuning is needed. I am still unclear why using the generated environments is better than the original ones: is it simply because of the larger amount of data? 3. It would be better if the visualization results were made into a panorama-instruction format so that readers can compare easily; also, no failure cases are shown. 4. I could not find any examples of generated instructions or a measure of the quality of the generation, thus it is hard to evaluate the performance of the Speaker. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It would be better to indicate in the tables that DUET-CLIP is the baseline to make comparisons easier. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer wyXn** > Q1: Discussion of selection of image captioning model, and clarification for generating panorama from text. **Image captioning model choice.** We choose BLIP-2 for image captioning since it has state-of-the-art zero-shot performance on multiple image captioning benchmarks (Table 1 in the [BLIP-2 paper](https://arxiv.org/pdf/2301.12597.pdf)). Besides, we compare the captions generated by OFA[1] and BLIP-2, and manually inspect 50 caption outputs on the R2R seen environments before deciding on BLIP-2. **Panorama generation from text clarification.** To generate panoramas from text captions, we propose a novel image inpainting approach to gradually generate the panorama from single images (L148-L171). Specifically, each panorama is discretized into 36 views, and we generate captions for each view. Then, we choose one view in the middle elevation as the starting point, and generate the image given the caption. We gradually rotate the image and inpaint the unseen observation recursively to generate the full panorama from multiple text captions. > Q2: Effectiveness of panorama replacement in fine-tuning. Due to the limited available training environments for VLN, prior work shows that the agent tends to overfit to the low-level appearance features of the environment [2]. Thus, randomly replacing observations with PanoGen environments during training increases environment diversity, which helps avoid overfitting and improves generalization (L32-L51). > Q3: Qualitative example of panorama-instruction pairs. We include one qualitative example of a panorama-instruction pair in the general response pdf. Our PanoGen environments contain similar semantics to the original environments, while being much more diverse. > Q4: Evaluation of speaker. **Automatic BERTScore evaluation over instructions.** We calculate the BERTScore of instructions generated by our mPLUG-based speaker and the EnvDrop speaker. 
Specifically, we use both speakers to generate the instructions on the validation unseen set of R2R data. We use Bart-base as the base model to calculate the BERTScore. Our speaker achieves a BERTScore of 71.8, while the EnvDrop speaker achieves a BERTScore of 70.5. **Qualitative example.** Besides the evaluation with BERTScore, we also include the instructions generated with the EnvDrop speaker and our speaker in the general response pdf. We find that our speaker tends to generate instructions with more details, while the EnvDrop instructions only mention “walk to the right side of the room”. > Q5: Emphasize DUET-CLIP baseline. Thanks for pointing this out. We will add “DUET-CLIP is considered as our baseline approach.” to the Table captions. [1] OFA: Unifying Architectures, Tasks, and Modalities through a Simple Sequence-to-Sequence Learning Framework. In ICML 2022. [2] Diagnosing the Environment Bias in Vision-and-Language Navigation. In IJCAI 2020. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed reply, I have no other concerns. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and positive engagement! We're glad that our response addressed all your questions.
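As background on the BERTScore evaluation discussed above: BERTScore greedily matches each candidate token embedding to its most similar reference token embedding (and vice versa) and averages the matched cosine similarities into precision, recall, and F1. The following is a simplified numpy illustration of that idea only; random vectors stand in for real token embeddings, and this is not the actual bert-score package or the Bart-base model used in the rebuttal:

```python
# Simplified greedy-matching core of BERTScore (illustrative sketch).
import numpy as np

def greedy_bertscore(cand, ref):
    """cand: (m, d) and ref: (n, d) L2-normalized token embeddings."""
    sim = cand @ ref.T                  # (m, n) cosine similarities
    precision = sim.max(axis=1).mean()  # best reference match per cand token
    recall = sim.max(axis=0).mean()     # best candidate match per ref token
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 16))          # 6 fake tokens, 16-dim embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Comparing a text against itself yields a perfect score.
p, r, f1 = greedy_bertscore(emb, emb)
```

In practice the bert-score package adds tokenization, contextual embeddings from a pretrained model, and baseline rescaling on top of this matching step.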
Summary: This paper presents a new data augmentation method for VLN tasks. The proposed method first generates captions for each view and then recursively generates new images to ensure the consistency among multi-views. The authors demonstrate two ways to utilize the newly generated panorama on three benchmarks: R2R, CVDN, R4R. Better results are achieved compared to the previous SoTA. Strengths: 1. The paper is well-written and easy to follow, with clearly stated objectives and technical descriptions. 2. Strong performances are achieved on three different VLN tasks. 3. Comprehensive and insightful ablation study is provided. 4. Quantitative and qualitative results show the importance of multi-view consistency to the VLN tasks. Weaknesses: 1. Will the generated panoramas in two consecutive steps be largely different? If yes, would it be better to reduce the difference across steps? 2. What are the impacts on high-level VLN tasks, such as REVERIE? 3. It could be better to make comparisons to previous data augmentation methods, such as EnvDrop and PREVALENT. ====== after rebuttal ============ Thanks for the authors' responses. All my concerns have been well addressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See details in weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See details in weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer Wxjf** > Q1: Consistency between panorama of two steps. We show one qualitative analysis in the general response pdf. Although we did not explicitly enforce consistency across steps, the general semantic information remains reasonable across steps. In the given example, the navigation consistently takes place in a bedroom with a sofa and an outside view of the sea. > Q2: REVERIE performance. We show our approach’s performance on REVERIE in the following Table. Both pre-training with our speaker data and fine-tuning with observation replacement achieve a large performance improvement (4.6% absolute improvement in SR) over the CLIP baseline on the REVERIE dataset. | Model | SR | SPL | RGS | RGSPL | |-----------------|-------|-------|-------|-------| | DUET | 46.98 | 33.73 | 32.15 | **23.03** | | DUET-CLIP | 46.58 | 34.14 | 31.70 | 22.89 | | PanoGen+mPLUG | 49.22 | 33.44 | 32.80 | 22.45 | | PanoGen+Replace | **51.18** | **34.99** | **33.26** | 22.99 | > Q3: Comparison with EnvDrop and Prevalent. * **Comparison with PREVALENT**. Our approach is compatible with the existing instruction augmentation approach PREVALENT. As we described in L229-L231, we follow our baseline approach DUET and pre-train the VLN agent on both R2R data and PREVALENT data, which contains synthetic instructions for unannotated paths in the seen environments. * **Comparison with EnvDrop**. We adapt EnvDrop to DUET by replacing the regular dropout layer in the image encoder with environment-level dropout. Besides, we also compare with another environment-level data augmentation approach, EnvEdit [1], where we follow the batch mixing approach in EnvEdit and randomly replace half of the data in a batch with the edited environments, which change the appearance of objects, during VLN fine-tuning. The results shown in the Table below demonstrate the effectiveness of training with our PanoGen environments. 
| Model | TL | NE | SR | SPL | |---------|-------|------|-------|-------| | EnvEdit | 13.61 | 3.03 | 72.80 | 63.17 | | EnvDrop | 13.28 | 3.12 | 72.58 | 62.40 | | PanoGen | 13.40 | **3.03** | **74.2** | **64.3** | [1] EnvEdit: Environment Editing for Vision-and-Language Navigation. In CVPR 2022.
Summary: In this paper, the authors propose to leverage the generative model to create panoramic images for agent training. A recursive inpainting method is adopted to generate 360-degree panorama views, which aims to ensure the co-occurrence of objects and enough diversity. Experiments are conducted on R2R, R4R, and CVDN datasets, confirming the effectiveness of the proposed method. Strengths: - The proposed pipeline is reasonable. - The performance improvement is promising when using PANOGEN. - The paper is well-written and easy to follow. Weaknesses: - My main concern is the lack of novelty. In my view, the authors simply leverage and modify the existing text2img model to conduct data augmentation for VLN agents, and there is no specific design for the VLN method. Thus I believe this work does not provide enough technical contribution to the VLN community. - As shown in Tables 2 and 3, compared to DUET-CLIP on R2R, the proposed method does not show performance improvement on the SPL. - More qualitative results of generated panoramic environments need to be provided to better examine the spatiotemporal consistency of the panoramas. - Lacking experiments on datasets with high-level instructions, such as REVERIE. - Missing some references such as [1-4]. [1] HOP+: History-enhanced and Order-aware Pre-training for Vision-and-Language Navigation. In TPAMI. [2] Adaptive Zone-Aware Hierarchical Planner for Vision-Language Navigation. In CVPR 2023. [3] LANA: A Language-Capable Navigator for Instruction Following and Generation. In CVPR 2023. [4] Reinforced Structured State-Evolution for Vision-Language Navigation. In CVPR 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Why were the performance results of DUET-CLIP on the test set not provided? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer tReg** > Q1: Novelty. To adapt the text2image model for VLN, our main technical contributions include: * We propose a novel inpainting way to generate consistent panorama views for VLN instead of single image views (Sec. 3.2). * We propose a multi-modal transformer-based model to generate instructions for our PanoGen environments (Sec. 4.3), which shows superior performance to the baseline speaker when evaluated on downstream tasks, and higher similarity with the PanoGen environments. * We propose two ways of effectively incorporating PanoGen environments in VLN training -- (1) utilizing the generated instruction-trajectory pairs in pre-training (Sec. 4.3) (2) replacing observations with augmented environments during fine-tuning (Sec. 4.4). We believe all these aspects are novel and have specific designs suited for VLN tasks, and hence achieve SotA on multiple VLN tasks (i.e., R2R, CVDN, REVERIE). Besides, our approach can be potentially generalized to other text-guided embodied tasks in panoramic environments, and impact a larger research field besides VLN. > Q2: Performance compared with DUET-CLIP on R2R. Though our method has slightly lower SPL on R2R, it achieves a significant improvement on CVDN (0.43m absolute improvement in GP, a relative improvement of 7%). Besides, it also achieves better performance on REVERIE, as shown in Q4. > Q3: Qualitative results of PanoGen environments. We provide two more qualitative examples of our PanoGen environments in Appendix Figure 1. Besides, we provide one more panorama trajectory-instruction pair example in the general response pdf for qualitative analysis. Our PanoGen environments contain similar semantics to the original environments, while being much more diverse. The instructions generated by our mPLUG-based speaker are also more detailed. > Q4: REVERIE performance. We show our approach’s performance on REVERIE in the following Table. 
Both pre-training with our speaker data and fine-tuning with observation replacement achieve a large performance improvement (4.6% absolute improvement in SR) over the CLIP baseline on the REVERIE dataset. | Model | SR | SPL | RGS | RGSPL | |-----------------|-------|-------|-------|-------| | DUET | 46.98 | 33.73 | 32.15 | **23.03** | | DUET-CLIP | 46.58 | 34.14 | 31.70 | 22.89 | | PanoGen+mPLUG | 49.22 | 33.44 | 32.80 | 22.45 | | PanoGen+Replace | **51.18** | **34.99** | **33.26** | 22.99 | > Q5: Missing references. Thanks so much for pointing out these references. We will definitely add and discuss them in the final version. > Q6: DUET-CLIP test leaderboard performance. The test leaderboard allows only a limited number of submissions, and the community usually uses it to submit only the final models instead of intermediate baselines (to avoid excessive utilization of the test set and maintain its blind nature). The validation unseen set also contains environments unseen during training to test the generalization ability of our proposed agents, and on it our agent shows better performance than the baseline approach.
Rebuttal 1: Rebuttal: **General Response** We thank all the reviewers for their thoughtful feedback. We are glad that they find our work novel and creative (Reviewer vRWy, wyXn), and that it provides a cost-effective way to tackle the data scarcity problem for VLN and potentially more general robotics learning (Reviewer vRWy, wyXn, PpHD). We thank them for recognizing the promising improvement brought by our approach (Reviewer tReg, Wxjf, VrWy), and acknowledging our ablation studies as comprehensive and insightful (Reviewer Wxjf, wyXn). We thank them for finding our paper well-written and easy to follow (Reviewer tReg, Wxjf, wyXn). We address their questions below and will include them in the final version. Pdf: /pdf/abda2425194e940287ca7af5f9aff7f1fdd957d4.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a creative system-level solution for VL-navigation agent training, by incorporating the recent T2I model to increase data diversity while following the context and human intuition, which facilitates effective pre-training for domain-specific tasks. The proposed pipeline with image captioning and recursive in-painting can potentially create an infinite number of diverse panoramic environments conditioned on text. The authors further propose two ways to use this generated panoramic data: replacing existing images or generating new trajectory-instruction pairs. They demonstrate its effectiveness on existing VL-Nav benchmarks, with better generalization to unseen environments. Strengths: The proposed method of leveraging existing T2I models for VL-Nav tasks is creative and provides a cost-effective way to generate rich data for training embodied agents. In addition to the effective method of generating panoramas for indoor environments, the authors also provide a complete pipeline for using the data and evaluate their idea on well-known public benchmarks. This required considerable effort, but will inspire follow-up work in the community to leverage pre-trained generative models to create diverse training data for downstream tasks. The proposed framework has achieved exciting results in improving existing methods in instructed navigation tasks, such as improving goal progress by 1.59 meters on the CVDN test set. The ablation experiments with different ways and ratios of using the generated panorama data are also commendable, as they help to achieve a better system design. Weaknesses: - It seems that the comparison to previous practices is missing. 
There have been multiple different practices used to avoid overfitting and increase data diversity, such as manual domain randomization, re-lighting existing environments, or joint training over multiple datasets for the same task. If possible, it would be helpful to know how this proposed framework compares to existing practices. - Another concern is the quality of the synthetic data. While nearby view in-painting might make sense, the final generated panorama can sometimes be counterintuitive and lack loop closure characteristics. The generated trajectory-instruction pairs can also sometimes make no sense, and in the replaced-view case, the image may differ substantially from the original in layout. It would be helpful to know if there is a way to measure the quality of the generated data besides the final evaluation in the target tasks, since some generated data might be harmful to the tasks. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: When generating realistic trajectories from the panorama images, is any depth information (e.g., from a pre-trained depth model) used as well? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer VRWy** > Q1: Comparison with previous practices which avoid overfitting and increase data diversity. First, our approach is compatible with the existing instruction augmentation approach PREVALENT[1]. As we described in L229-L231, we follow our baseline approach DUET and pre-train the VLN agent on both R2R data and PREVALENT data, which contains synthetic instructions for unannotated paths in the seen environments. Besides, we further compare our approach with two existing approaches that augment the environments to avoid overfitting: EnvEdit[2] and EnvDrop[3]. For adapting EnvEdit to DUET, we follow the batch mixing approach in EnvEdit and randomly replace half of the data in a batch with the edited environments, which change the appearance of objects, during VLN fine-tuning. For adapting EnvDrop to DUET, we replace the regular dropout layer in DUET with the proposed environment-level dropout layer during fine-tuning. As the results below show, training with our PanoGen environments achieves better performance than previous approaches. | Model | TL | NE | SR | SPL | |---------|-------|------|-------|-------| | EnvEdit | 13.61 | 3.03 | 72.80 | 63.17 | | EnvDrop | 13.28 | 3.12 | 72.58 | 62.40 | | PanoGen | 13.40 | **3.03** | **74.2** | **64.3** | > Q2: Measurement of quality of generated data. We measure the quality of our generated data from two aspects. * **Similarity between instructions and environments.** First, we measure the similarity between the generated instructions and the PanoGen environments. We hypothesize that higher similarity between instruction and trajectory pairs in embedding space can indicate better alignment between instruction and trajectory. Specifically, we represent the trajectory representation by averaging the image embeddings of the viewpoints in the trajectory. The image embedding of each viewpoint is encoded with CLIP-ViT/16. We also encode the instruction with the CLIP text encoder. 
We calculate the cosine similarity between the instruction representation and the trajectory representation. As shown in the Table below, we find that the similarity between our PanoGen environments and instructions generated with mPLUG is higher than with instructions generated by the baseline EnvDrop speaker (No. 2 vs. No. 3). Besides, we calculate the similarity between randomly replacing 30% of the viewpoints with PanoGen environments and the original instructions (to mimic the observation replacement fine-tuning). We average the score over 5 runs to mitigate randomness. We find that randomly replacing the observation does not lead to a decrease in similarity (No. 4 vs. No. 1). | No. | Instruction | Environment | Similarity | |-----|-------------|---------------------------|----------------------| | 1 | Original | Original | 0.2845 | | 2 | EnvDrop | PanoGen | 0.2669 | | 3 | mPLUG | PanoGen | 0.2714 | | 4 | Original | 30% PanoGen, 70% Original | 0.2893 ($\pm$0.0001) | * **Automatic BERTScore evaluation over instructions.** Second, we calculate the BERTScore of instructions generated by our mPLUG-based speaker and the EnvDrop speaker. Specifically, we use both speakers to generate the instructions on the validation unseen set of R2R data. We use Bart-base as the base model to calculate the BERTScore. Our speaker achieves a BERTScore of 71.8, while the EnvDrop speaker achieves a BERTScore of 70.5. **Qualitative example.** Besides the above automatic evaluation results, we also include one panorama-instruction example in the general response pdf for qualitative analysis. Our PanoGen environments contain similar semantics to the original environments, while being much more diverse. The instructions generated by our mPLUG-based speaker are also more detailed. **Loop closure characteristics.** Lastly, we are working on improving the loop closure characteristics of our panoramas. 
Specifically, we generate the last view by conditioning on both its nearby views and inpainting the middle observation. Due to time limitations, we cannot share the results on the VLN task yet, but we will include the results in the final version (and will also try to report them before the rebuttal discussion period ends), as an investigation of how much loop closure characteristics impact navigation performance. > Q3: Utilization of depth information. Depth information is not used in our trajectory generation. We believe utilizing depth information for generating environments that are more consistent in 3D space would be interesting future work. [1] Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training. In CVPR 2020. [2] EnvEdit: Environment Editing for Vision-and-Language Navigation. In CVPR 2022. [3] Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout. In NAACL 2019. --- Rebuttal Comment 1.1: Comment: Thanks for the explanation and supporting results! My concerns have been well addressed. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and positive engagement! We are glad that our response addressed all your questions.
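The trajectory-instruction similarity measurement described in Q2 above (average the per-viewpoint CLIP image embeddings into a trajectory vector, then compare it to the instruction embedding by cosine similarity) can be sketched as follows. Random vectors stand in for the real CLIP-ViT/16 image and text features, so the function name and dimensions here are illustrative assumptions, not the authors' code:

```python
# Sketch of the trajectory-vs-instruction similarity from the rebuttal.
import numpy as np

def trajectory_instruction_similarity(view_embs, instr_emb):
    """view_embs: (num_viewpoints, d) image embeddings; instr_emb: (d,)."""
    traj = view_embs.mean(axis=0)   # average over viewpoints in the trajectory
    return float(traj @ instr_emb /
                 (np.linalg.norm(traj) * np.linalg.norm(instr_emb)))

rng = np.random.default_rng(0)
views = rng.normal(size=(7, 512))   # 7 viewpoints, CLIP-sized 512-dim features
instr = rng.normal(size=512)        # stand-in for the CLIP text embedding

sim = trajectory_instruction_similarity(views, instr)
```

With real CLIP features this yields scores like the 0.26-0.29 values in the table; with random stand-ins the score is only guaranteed to lie in [-1, 1].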
Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing
Accept (poster)
Summary: This paper simultaneously addresses the label imbalance problem and uncertainty quantification capability in regression. The authors propose to enhance the reweighting technique dealing with the imbalance problem in (Yang et al., 2021) to be applicable to VAE and combine the method with the output distribution and the corresponding loss in (Amini et al., 2020), which provides uncertainty quantification capability. Experimental results on several real-world datasets demonstrate that the proposed method performs better than state-of-the-art imbalanced regression methods in terms of both accuracy and uncertainty estimation. Strengths: ### Originality: - The authors simultaneously address the label imbalance problem and uncertainty estimation capability in regression, which is a novel problem setting. ### Quality: - The combination of the reweighting technique in (Yang et al., 2021) and the output distribution and the corresponding loss in (Amini et al., 2020) is non-trivial and works well. - The authors showed the superiority of the proposed method experimentally in terms of both accuracy and uncertainty estimation, using multiple public datasets. ### Clarity: - The presentation is clear. Weaknesses: - Good combination of the SOTA, but the originality can be limited because of that. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: They discussed that the exact computation of variance of the variances is challenging in Section 5.3. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive and encouraging comments as well as the insightful questions. We are glad that you find the problem we address ``"novel"``, our method ``"non-trivial"``/``"works well"``, our presentation ``"clear"``, and that experiments show our method's ``"superiority"`` ``"in terms of both accuracy and uncertainty estimation"``. Below we address your question. **Q1: "Good combination of the SOTA, but the originality can be limited because of that."** Thank you for acknowledging our contribution and our SOTA performance. We should have better highlighted our originality in the main paper. Specifically, our VIR not only combines existing SOTA methods (e.g., DIR), as you mentioned, but also presents substantial divergence from them (see a more in-depth discussion of these differences in Section 1.2 of the Supplementary Material): **(1)** VIR is a deep generative model to define how imbalanced data are generated, which is learned by a principled variational inference algorithm. In contrast, DIR is simply a discriminative model (without any principled generative model formulation) that directly predicts the labels from input. It is more prone to overfitting. **(2)** DIR uses deterministic representations, with one vector as the final representation for each data point. In contrast, our VIR uses probabilistic representations, with one vector as the mean of the representation and another vector as the variance of the representation. Such a dual representation is more robust to noise and therefore leads to better prediction performance. **(3)** DIR is a deterministic model, while our VIR is a Bayesian model. Essentially VIR is equivalent to sampling infinitely many predictions for each input data point and averaging these predictions. Therefore intuitively it makes sense that VIR could lead to better prediction performance. 
**(4)** Different from VAE and DIR, VIR introduces a reweighting mechanism naturally through the **pseudo-count formulation** in the NIG distribution (discussed in the paragraphs Intuition of Pseudo-Counts for VIR and From Pseudo-Counts to Balanced Predictive Distribution of the paper). Note that such a reweighting mechanism is more natural and powerful than DIR since it is rooted in the probabilistic formulation. Besides methodological originality, our other contributions include: **(a)** We identify the problem of probabilistic deep imbalanced regression as well as two desiderata, balanced accuracy and uncertainty estimation, for the problem. **(b)** As a byproduct, we also provide strong baselines for benchmarking high-quality uncertainty estimation and promising prediction performance on imbalanced datasets. We will include the discussion above in the main paper of our revision to better highlight our originality as suggested.
Summary: The authors propose a variational regression model for imbalanced data, which (1) borrows data with similar regression labels for the variational distribution (neighboring and identically distributed: N.I.D.) and (2) utilizes conjugate distributions to impose probabilistic reweighting on the imbalanced data to give better uncertainty estimation. Experiments show the proposed model achieves the SOTA on several datasets. Strengths: - Imbalanced regression is a research area that is yet to be explored, although its real-world application is important. - In addition, the proposed model can provide uncertainty of the prediction, which is important for real-world applications. - The uncertainty estimation on imbalanced datasets is also an interesting field but is yet to be explored. - Overall, the present topic is very relevant in the community. I encourage the authors to develop and explore this direction more, in view of the nice performance of the model. - The performance of the proposed model is excellent. Both its accuracy and its uncertainty estimation outperform the SOTA models. - The present paper is easy to follow and well-written. I enjoyed reading it. I found some nice "road signs" for the readers to follow the logic and story. Weaknesses: - The code for reproduction is not available. I would like to see and use your code. I could not find any other major weakness. - See Questions below. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Here are my questions and comments. - [Question] Just out of curiosity. What is the optimal prior for VIR? For the standard VAE, the optimal prior (that maximizes ELBO) is known to be the aggregated posterior. This might lead to an interesting future work. - [Comment (minor)] "NIG" on L.137 should be spelled out here, rather than on L. 213, or define NIG on L. 40. - [Comment (super-minor)] (1), (2), ... in the main text might sometimes be confused with the number of equations. 
Possible alternatives are: (i), (ii),..., (I), (II), ..., (a), (b), ..., (A), (B), ..., 1), 2), ..., etc. - [Question] Where does the idea of the hierarchical structure of the statistics (L. 166--195) (statistics of statistics for smoothing) come from? What motivated this idea? This is very interesting. - [Comment (minor)] L. 191: cross -> across? - [Comment (minor)] L. 212: correspond -> corresponding? - [Question (major)] At first glance, I could not fully understand why Eqn. 5 alleviate the imbalance problem. Could you clarify the underlying mechanism more? - [Question (major)] Is the performance improvement (accuracy and uncertainty) "significant"? If so, in what sense? (Sorry for the vague question, but this is an important point when evaluating model's performance and paper's quality because a marginal improvement is sometimes meaningless.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are included in Section 5.3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive and encouraging comments as well as the insightful questions. We are glad that you find the problem we address ``"important"``/``"interesting"``, our performance ``"excellent"``, and our paper ``"easy to follow"``/``"well-written"``. Below we address your questions one by one. **Q1: "The code for reproduction is not available. I would like to see and use your code. I could not find any other major weakness."** Thank you very much for your interest. We have finished cleaning up the source code and will release it after the paper is accepted to facilitate further research in the community. **Q2: "What is the optimal prior for VIR? For the standard VAE, the optimal prior (that maximizes ELBO) is known to be the aggregated posterior. This might lead to an interesting future work"** Thanks for the insightful question. Our optimal prior is a *neighbor-weighted* version of the aggregated posterior: for the standard VAE, different data points contribute **independently** to the aggregated posterior; in contrast, for our VIR, the importance of each data point with respect to the aggregated posterior is **affected by** data points with neighboring labels. We will include the discussion above in the revision to provide more insight into the difference between VAE and VIR. **Q3: ""NIG" on L.137 should be spelled out here, rather than on L. 213, or define NIG on L. 40."** Thank you for your suggestion. We will fix this in the revision. **Q4: "... in the main text might sometimes be confused with the number of equations..."** Thank you for your suggestion. We will revise our paper accordingly in the revision. **Q5: "Where does the idea of the hierarchical structure of the statistics (L. 166--195) (statistics of statistics for smoothing) come from? What motivated this idea? This is very interesting."** This is a good question and thank you for your interest. 
**(1) Statistics.** The requirement to perform feature smoothing to get the representation $z_i$ necessitates the computation of mean and variance of $z_i$'s neighboring data (i.e., data with neighboring labels). Here $z_i$ contains the **statistics** of neighboring data. **(2) Statistics of Statistics.** Furthermore, uncertainty estimation requires a stochastic representation for $z_i$, e.g., the mean and variance of $z_i$ (note that $z_i$ itself is already a form of statistics). This motivates the hierarchical structure of the statistics, i.e., **statistics of statistics**. Here the variance measures the uncertainty of the representation. We appreciate your suggestion and will incorporate this discussion into the revised version of our paper. **Q6: "L. 191: cross -> across? L. 212: correspond -> corresponding?"** We are sorry for the typos, and will fix them in the revision. **Q7: "At first glance, I could not fully understand why Eqn. 5 alleviate the imbalance problem. Could you clarify the underlying mechanism more?"** We are sorry for the confusion. Please refer to **Q2 in the Global Response**. **Q8: "Is the performance improvement (accuracy and uncertainty) "significant"? If so, in what sense? (Sorry for the vague question, but this is an important point when evaluating model's performance and paper's quality because a marginal improvement is sometimes meaningless.)"** Thank you for mentioning this. According to your suggestion, we ran the corresponding hypothesis tests, and the p values are in the range of $(9.08 \times 10^{-7}, 3.38 \times 10^{-4})$, much lower than the threshold of $0.05$ and therefore verifying the significance of VIR's performance improvement. We will include these results in the revision to strengthen the paper as suggested. --- Rebuttal Comment 1.1: Title: Reply by Reviewer 9XGJ Comment: Thank you for the reply. I carefully read all the reviews and responses. - All of my questions and concerns are addressed. 
- The global response and the discussion with Reviewer cFEB are interesting and will make the paper more convincing. - I think the fact that no other BNN models are included in the paper is not a major problem because the proof-of-concept is already done in the main text and the result is convincing. - As for the number of bins, it is a common problem when we use the quantization- (= binning-) based approach to regression problems; it is a common hyperparameter. - The global response shows that the proposed model is relatively robust to the number of bins. Overall, I strongly support the acceptance of the present paper and keep the score as is: - The topic is relevant and important for real-world applications. - The topic is interesting because this is interdisciplinary research between imbalanced regression and Bayesian inference. - I could not find major technical flaws. - The authors' responses are insightful and make sense, which will make the paper better after revision. - The code will be published. - The experimental results are convincing. Excellent work! --- Reply to Comment 1.1.1: Title: Thank You Comment: Thank you for your encouraging and detailed further response! We are glad that you have a thorough understanding of our contributions, find them novel, and acknowledge that our rebuttal addresses all the questions/concerns (including the points on "no other BNNs", "number of bins" and the discussion with Reviewer cFEB). We would be immensely grateful if you could consider raising the confidence score to reflect the current assessment. Thank you again! Best regards, Authors of VIR
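As a concrete illustration of the "statistics of statistics" idea discussed in Q5 above, here is a minimal NumPy sketch; the array shapes and the helper name are illustrative, not taken from the paper.

```python
import numpy as np

def statistics_of_statistics(z_mu, z_var):
    """Two-level statistics for one label bin.

    z_mu, z_var: (N_b, d) arrays holding each sample's feature mean and
    variance in bin b (the level-1 statistics of neighboring data).
    """
    # Level 2: summarize the per-sample statistics across the bin,
    # e.g. mu_b^mu = (1/N_b) * sum_i z_i^mu as in the equations of Line 176-177.
    mu_of_mu = z_mu.mean(axis=0)    # mean of the means
    var_of_mu = z_mu.var(axis=0)    # spread of the means -> representation uncertainty
    mu_of_var = z_var.mean(axis=0)  # mean of the variances
    return mu_of_mu, var_of_mu, mu_of_var
```

Here the level-2 variance `var_of_mu` is what measures the uncertainty of the representation, as described above.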
Summary: This paper proposes a variational imbalanced regression model by taking the Neighboring and Identically Distributed (N.I.D.) assumption to solve both imbalanced regression and uncertainty estimation problems. Experiments on four imbalanced datasets demonstrate the effectiveness of the proposed method. Strengths: 1) Compared with imbalanced classification, imbalanced regression is underexplored and also an important topic. 2) The Neighboring and Identically Distributed (N.I.D.) assumption seems more reasonable than the Independent and Identically Distributed (I.I.D.) assumption. 3) The proposed method improves not only the performance of few-shot region, but also the performance of many-shot region. Weaknesses: 1) The description of some parts of the paper is not clear, e.g., how to define the neighboring labels? What is the detailed formulation of the importance weights in line 237? 2) Some statements in the paper are somewhat subjective and lack support, e.g., in line 42-43, the authors claim that "This allows the negative log likelihood to naturally put more focus on the minority data", but I do not find any "naturally" thing, the important weights are mainly determined by the kernel functions, which needs manually selection; in line 273, the authors claim that "Such dual representation is more robust to noise", but they neither list any references nor conduct any experiments to support it. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) What are the new challenges of imbalanced regression compared with imbalanced classification? 2) Why do you partition the label space into B equal-interval bins? Does the choice of B affect the performance of VIR? 3) The choice of the kernel functions and experiments for the corresponding parameters selection should be given since it determines the importance weights. 4) In section 4.1, the authors claim that N.I.D. 
will lead to a lower generalization error just by simple text description, without any rigorous computation or derivation, this is not convincing for me. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive and encouraging comments with insightful questions. We are glad that you find the problem ``"important"`` and our N.I.D. assumption ``"reasonable"``, and acknowledge that our method improves performance. Below we address your questions in detail one by one. **Q1: "... how to define the neighboring labels? ... detailed formulation of the importance weights in line 237?"** We are sorry for the confusion and the typo. In Line 237~241, it should be $(\sum_{b'\in B}k(y_b,y_{b'})p(y_{b'}))^{-\frac{1}{2}}$ rather than $(\sum_{b'\in B}k(y_b,y_{b'}))^{-\frac{1}{2}}$. Here $B$ is the set of bin $b$'s neighboring bins, $k(y_b,y_{b'})$ is a kernel measuring the distance between labels $y_b$ and $y_{b'}$, and $p(y_{b'})$ is the frequency (density) of label $y_{b'}$ in the dataset. We use a **Gaussian kernel** $k(a, b)=\exp({-\frac{(a - b)^{2}}{2\sigma^2}})$. For fair comparison, we use exactly the same parameter configuration as the DIR paper [1]. Specifically, we set $\sigma=2$; for label $y_b$ in bin $b$, we define neighboring labels as labels $y_{b'}$ such that $|y_{b'}-y_b|\leq 2$, i.e., $B$ contains $5$ bins. For example, if $y_b=23$, its neighboring labels are $21$, $22$, $23$, $24$, and $25$. With the definition above, we can see that data in the *minority* neighborhood (smaller $p(y_{b'})$) has *larger* importance weights $(\sum_{b'\in B}k(y_b,y_{b'})p(y_{b'}))^{-\frac{1}{2}}$ in the training objective function. We will include the details above in the revision as suggested. **Q2.1: "... allows the negative log likelihood to naturally put more focus on the minority data..."** We are sorry for the confusion. Please refer to **Q3 in the Global Response**. **Q2.2: "...needs manually selection;..." "The choice of the kernel functions...should be given..."** This is a good question. Please refer to **Q1 above** and **Q4 in the Global Response** . 
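The weighting scheme from Q1 can be sketched as a toy NumPy function (the function name and example density below are our own; the bin frequencies are assumed to be precomputed from the dataset):

```python
import numpy as np

def importance_weights(bin_labels, bin_density, sigma=2.0, radius=2):
    """SQINV-style weights (sum_{b' in B} k(y_b, y_b') p(y_b'))^{-1/2}.

    bin_labels: label value of each bin (e.g. ages 0..99).
    bin_density: empirical frequency p(y_b') of each bin.
    """
    w = np.empty(len(bin_labels))
    for i, y_b in enumerate(bin_labels):
        nbr = np.abs(bin_labels - y_b) <= radius              # neighborhood B
        k = np.exp(-(bin_labels[nbr] - y_b) ** 2 / (2 * sigma ** 2))
        w[i] = (k @ bin_density[nbr]) ** -0.5                 # minority -> larger weight
    return w
```

With an imbalanced density, bins whose neighborhoods are sparse receive larger weights, which is exactly the reweighting behavior described in the answer above.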
**Q2.3: "..."Such dual representation is more robust to noise"...references...experiments..."** We are sorry for the confusion. Please refer to **Q2 in the Global Response**. **Q3: "...new challenges of imbalanced regression compared with imbalanced classification?"** This is a good question. As shown in **Figure 1 of the 1-page PDF**, we used two datasets, CIFAR-100, a 100-class classification dataset [2] and IMDB-WIKI [3], an age estimation dataset with labels in the range 0~99, to compare imbalanced data challenges in classification vs. regression. We adjusted their label ranges for consistency and simulated data imbalance, ensuring identical label density distribution, as seen in Fig. 1(a) in our global-response PDF. A ResNet-50 model trained on these datasets highlighted differences in test error distributions. Results from CIFAR-100 showed a negative correlation between test error and label density ($-0.76$), which is expected since classes with more samples often have lower errors. However, IMDB-WIKI, despite having the same label density as CIFAR-100, had a more uniform error distribution that **did not align as closely with label density ($−0.47$)**, as shown in Fig. 1(b). *This distinction highlights unique challenges in imbalanced regression*. Traditional imbalanced learning methods, which address the imbalance in the **empirical** label density, work for classification but might falter for regression with continuous labels. The challenge grows in *probabilistic* imbalanced regression, where both accuracy and *uncertainty estimation* matter. Addressing these challenges is therefore the focus of our paper. **Q4: "Why...partition...into B equal-interval bins? Does the choice of B affect the performance of VIR?"** This is a good question. Please refer to **Q1 in the Global Response**. **Q5: "In section 4.1, the authors claim that N.I.D. 
will lead to a lower generalization error just by simple text description, without any rigorous computation or derivation, this is not convincing for me."** We are sorry for the confusion. We meant to use Section 4.1 to discuss the *intuition* of our method. We have prepared a rigorous proof and will include it in the Supplementary Material of our revision. Below we discuss the main idea of the proof. In general, the generalization error is bounded with probability $1-\eta$ by: test error $\leq$ training error + bias + variance, where bias $=\frac{\Delta}{N} \sum_{(x,y)} |1 - \frac{P_{y}}{\hat{P}_{y}}|$, variance $=\frac{\Delta}{N} \sqrt{\frac{\log(2|\mathcal{H}|/\eta)}{2}} \sqrt{\sum_{(x,y)}(\hat{P}_{y})^{-2}}$. Here (1) $\hat{P}_{y}$ is the smoothed label distribution used in our VIR's objective function, (2) $P_{y}$ is the label distribution, (3) $N$ is the number of data points, (4) $\Delta=y_{max}-y_{min}$, where $y_{max}$ and $y_{min}$ are the maximum and minimum labels in the dataset, respectively, and (5) $\mathcal{H}$ is the finite hypothesis space of prediction models. We can see that if one directly uses the original label distribution in the training objective function, i.e., $\hat{P}_{y}=P_y$: (a) The "bias" term will be $0$. (b) However, the "variance" term will be extremely large for minority data because $\hat{P}_{y}$ is very close to $0$. In contrast, under N.I.D., $\hat{P}_{y}$ used in the training objective function will be smoothed. Therefore: (a) The minority data's label density $\hat{P}_{y}$ will be smoothed out by its neighbors and becomes larger (compared to the original $P_y$), leading to smaller "variance" in the generalization error bound. (b) Note that since $\hat{P}_{y}\neq P_y$, VIR (with N.I.D.) essentially increases bias, but significantly reduces its variance in the imbalanced setting, thereby leading to a lower generalization error. We will include the discussion above and our proof in the revision as suggested. 
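This bias-variance trade-off can be illustrated numerically with a small sketch; the toy densities, constants, and the simplification of summing over bins rather than individual $(x, y)$ pairs are our own assumptions:

```python
import numpy as np

def bound_terms(P, P_hat, N=1000, delta=99.0, H_size=10**6, eta=0.05):
    """Bias and variance terms of the bound, per-bin toy approximation.

    P: true label density per bin; P_hat: density used in the objective.
    """
    bias = (delta / N) * np.sum(np.abs(1.0 - P / P_hat))
    var = (delta / N) * np.sqrt(np.log(2 * H_size / eta) / 2.0) \
          * np.sqrt(np.sum(P_hat ** -2.0))
    return bias, var

P = np.array([0.50, 0.30, 0.15, 0.04, 0.01])  # imbalanced toy label density
P_smooth = 0.5 * P + 0.5 / len(P)             # crude stand-in for N.I.D. smoothing
```

Plugging in, `bound_terms(P, P)` gives zero bias but a variance dominated by the minority bins (through $\hat{P}_y^{-2}$), while `bound_terms(P, P_smooth)` trades a small nonzero bias for a much smaller variance, matching points (a) and (b) above.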
[1] Delving into Deep Imbalanced Regression. [2] Learning multiple layers of features from tiny images. [3] Deep Expectation of Real and Apparent Age from a Single Image Without Facial Landmarks. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I have read the Global Response for Q1, but I still cannot figure out why you partition the label space into B equal-interval bins; can you give a detailed explanation? --- Reply to Comment 1.1.1: Title: Thank You for Your Further Response Comment: Thank you for your further response. This is a good question. **Why We Need Bins.** Throughout our method, we need to compute the statistics (i.e., the mean and variance) and the "statistics of statistics" of data points (Line 164-165); computing these statistics (e.g., the mean) requires a group of data points. Therefore, we need to partition the continuous label space into $\mathcal{B}$ bins. For example, in the equations from Line 176-177, e.g., $\mu_b^{\mu} = \frac{1}{N_b} \sum\nolimits^{N_b}_{i=1} z_i^{\mu}$, we need to compute the statistics of bin $b$, which contains $N_b$ data points in the bin. As mentioned by Reviewer 9XGJ, it is ``"common"`` to ``"use the quantization- (= binning-) based approach to regression problems"``. It is also worth noting that in the extreme case where (i) each data point has a different label $y$ and (ii) we use a very small bin size, each bin will then contain exactly one data point. **Equal-Interval Bins versus Equal-Size Bins.** Note that since our smoothing kernel function is based on labels (i.e., $k(y, y')$), it is more reasonable to use **equal-interval** bins rather than **equal-size** bins. **(1)** For example, if we use the equal-interval bins $[0,1),[1,2),...$, VIR will naturally compute $k(y, y')$ for $y=1,2,3,4,5,...$ and $y'=1,2,3,4,5,...$. **(2)** In contrast, if we use equal-size bins, VIR may end up with *large intervals* and may lead to inaccurate kernel values for $k(y, y')$. 
To see this, consider a case where equal-size bins are $[0,1),[1,2),[2,3.1),[3.1,8.9),...$; the kernel value $k(y, y')$ between bins $[2,3.1)$ and $[3.1,8.9)$ is $k(2,3.1)$, which is very inaccurate since $3.1$ is very far away from the mean of the bin $[3.1,8.9)$ (i.e., $6$). Using small and equal-interval bins can naturally address such issues. Thank you again for keeping the communication channel open, and we will be very happy to provide more details if you have any further questions.
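The equal-interval vs. equal-size contrast discussed above can be illustrated with a quick sketch (the toy labels and helper names are ours):

```python
import numpy as np

def equal_interval_edges(y, num_bins):
    # same width for every bin
    return np.linspace(y.min(), y.max(), num_bins + 1)

def equal_size_edges(y, num_bins):
    # quantile edges: each bin holds roughly the same number of points
    return np.quantile(y, np.linspace(0.0, 1.0, num_bins + 1))

y = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 9.0])  # skewed toy labels
```

On skewed labels like these, the equal-size edges leave one very wide bin covering the sparse tail, so a label kernel $k(y, y')$ evaluated at that bin's boundary sits far from the bin's typical label; equal-interval edges keep every interval the same width and avoid this.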
Summary: In this work, the authors recognize that although the existing regression models for imbalanced datasets have been mainly developed to improve the prediction accuracy, they overlooked the quality of the uncertainty estimation. In this context, the authors propose a deep probabilistic regression framework to improve the uncertainty estimation performance as well, by combining the ideas of [1] and [2]. Specifically, the authors first consider multiple bins, split over the range of labels, to get statistics of the labels. Then, they revise the latent statistics of the VAE for imbalanced datasets, by smoothing these features based on the statistics of each bin and then applying a probabilistic whitening and recoloring procedure. Next, they use the revised latent features to get the posterior distribution of the NIG distribution, which acts as pseudo-counts that alleviate the issue of imbalanced sets. Last, they employ these parameters for prediction and training. Empirically, the authors demonstrate that the proposed approach can improve the prediction accuracy and uncertainty estimation on various datasets. [1] Delving into Deep Imbalanced Regression - ICML 21 [2] Deep Evidential Regression - NeurIPS 20 Strengths: * This work extends the DIR of [1], as a probabilistic model, to estimate the uncertainty. * This work considers using the NIG posterior distribution, updated by the statistics of the stochastic latent feature. This seems to help balance the latent features of the imbalanced labels and thus yield credible uncertainty for imbalanced datasets. I believe that this is the novel part of this work when comparing [1] and [2]. [1] Delving into Deep Imbalanced Regression - ICML 21 [2] Deep Evidential Regression - NeurIPS 20 Weaknesses: * Absence of ablation study > In the experiment section, it seems that the proposed method improves the prediction accuracy and uncertainty estimation. 
However, the current work does not investigate (1) whether each trick of the proposed method, such as the use of the stochastic latent feature (VAE) and use of the posterior distribution (NIG), is effective and (2) whether the proposed method is consistent up to the number of bins. * Less explanation on why the evidential regression model is used along with DIR, instead of using other BNNs. > In general, BNNs are widely used to estimate uncertainty. I believe that applying BNNs with DIR could be a direct way to solve the targeted problem. However, the authors take the evidential regression approach without explaining the reason or its motivation. This makes it less clear why the proposed method is reasonable. If the authors provide the motivation of this approach or can demonstrate that the proposed method could outperform the results of DIR obtained by BNNs or other approaches for uncertainty estimation, I believe that the contribution of this work would be more clear and strong. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Is the performance of the proposed method consistent up to the number of bins? How can performance vary with the number of bins? * Why did the authors employ the stochastic latent feature through VAE, which is a different approach from [1]? Does the stochastic feature improve the performance compared to when the deterministic statistic is used to update the parameters of the NIG posterior distribution? * Did the authors consider BNNs with DIR or the Deep Ensemble approach [2] with DIR to solve the imbalanced regression problem? [1] Delving into Deep Imbalanced Regression - ICML 21 [2] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles - NeurIPS 17 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments and insightful questions. We are glad that you find our work ``"novel"``. Below we address your questions one by one. **Q1.1: "...ablation study...not investigate (1) whether each trick ... the stochastic latent feature (VAE) ... the posterior distribution (NIG), is effective ..."** Thank you for mentioning this. Actually, we did perform ablation studies and the results have been included in the Supplementary Material (please see Tables 8 and 9). Besides, according to your suggestions, we have also performed additional ablation studies. The results in Tables 3.1 and 3.2 below (more results in **Table 1 of the one-page PDF file**) demonstrate the effectiveness of each element in our proposed method. **VIR w/o VAE** refers to VIR *without stochastic latent features*, i.e., "the deterministic statistic is used to update the parameters of the NIG posterior distribution". **VIR w/o NIG** refers to VIR without NIG. Note that **VIR w/o VAE & NIG** is equivalent to **LDS+FDS+DER** in our main paper; here we include its results in the table as a reference to further demonstrate the effectiveness of our method. 
Table 3.1: Ablation studies on AgeDB in terms of MSE

| model | overall | many | median | few |
| :---------: | :------: | :------: | :------: | :------: |
| VIR w/o VAE & NIG | 112.62 | 94.21 | 140.03 | 210.72 |
| VIR w/o NIG | 87.48 | 73.72 | 107.64 | 161.69 |
| VIR w/o VAE | 96.46 | 86.72 | 102.56 | 171.52 |
| VIR (Ours) | **81.76** | **70.61** | **91.47** | **142.36** |

Table 3.2: Ablation studies on AgeDB in terms of NLL

| model | overall | many | median | few |
| :---------: | :------: | :------: | :------: | :------: |
| VIR w/o VAE & NIG | 3.787 | 3.689 | 3.912 | 4.234 |
| VIR w/o NIG | 3.722 | 3.604 | 3.821 | 4.209 |
| VIR w/o VAE | 3.784 | 3.685 | 3.866 | 4.218 |
| VIR (Ours) | **3.703** | **3.598** | **3.805** | **4.196** |

It's worth noting that we included similar ablation studies in Tables 8 and 9 of the Supplementary Material, where the "Encoder-only VIR" is equivalent to "VIR w/o NIG & LDS", and "Predictor-only VIR" is equivalent to "VIR w/o VAE & LDS". These results also verify the effectiveness of each element in our proposed method. We will incorporate the additional ablation studies above into the Supplementary Material in the revision as suggested. **Q1.2: "... method is consistent up to the number of bins." " performance vary with the number of bins?"** This is a good question. Please refer to **Q1 in the Global Response**. **Q2: "... why the evidential regression model is used along with DIR, instead of using other BNNs."** This is a good point. We do not consider other BNNs in this work because: **(1)** **Weights** in Bayesian Neural Networks (BNNs) are extremely high-dimensional; therefore BNNs have several limitations, including the intractability of directly inferring the posterior distribution of the **weights** given data, the requirement and computational expense of sampling during inference, and the question of how to choose a **weight** prior [3]. In contrast, evidential regression does not have these challenges. 
**(2)** In our preliminary experiments, we found that typical BNN methods suffer from computational inefficiency and would require at least two to three times more computational time and memory usage. In contrast, evidential regression does not involve such computation and memory overhead; its overhead only involves the last (few) layers, and is therefore minimal. **(3)** Additionally, as demonstrated in [2], Deep Ensemble typically performs as well as or even better than BNNs. Our method outperforms **Deep Ensemble** (Tables 1~6 in the paper, with more results in **Q4** below), therefore suggesting its superiority over typical BNN methods. We appreciate your suggestion and will incorporate the discussion above into our revised paper. **Q3: "Why ... employ the stochastic latent feature through VAE ...? Does the stochastic feature improve ...?"** We are sorry for the confusion. Please refer to **Q2 in the Global Response**. **Q4: "... consider the BNNs with DIR or the Deep ensemble [2] with DIR ...?"** As mentioned in **Q2**'s response, we did not include other BNNs due to their limitations in the context of our work. Furthermore, as demonstrated in [2], Deep Ensemble typically performs as well as or even better than BNNs. Our method outperforms Deep Ensemble (Tables 1~6 in the paper), therefore suggesting its superiority over typical BNN methods. According to your suggestion, we ran additional experiments on **combining Deep Ensemble and DIR** and report the results in Tables 4.1~4.2 below (more results in **Table 3 of the one-page PDF file**). These results show that while this combination allows DIR to produce uncertainties, our method can still outperform it by a large margin. Thanks for your insightful question, and we will include these additional results and the discussion above in the revision. 
Table 4.1: Results for DIR + Deep Ensemble in terms of MSE

| model | overall | many | median | few |
| :---------: | :------: | :------: | :------: | :------: |
| DIR + Deep Ensemble | 94.10 | 80.24 | 109.45 | 182.52 |
| VIR (Ours) | **81.76** | **70.61** | **91.47** | **142.36** |

Table 4.2: Results for DIR + Deep Ensemble in terms of NLL

| model | overall | many | median | few |
| :---------: | :------: | :------: | :------: | :------: |
| DIR + Deep Ensemble | 5.069 | 4.772 | 4.574 | 5.236 |
| VIR (Ours) | **3.703** | **3.598** | **3.805** | **4.196** |

[1] Delving into Deep Imbalanced Regression. [2] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. [3] Deep Evidential Regression.
Rebuttal 1: Rebuttal: We thank all reviewers for their encouraging and constructive comments. We are glad that they found the problems we identified ``"important"``/``"novel"`` (ivhk, 9XGJ, C6MR), our idea/method ``"novel"``/``"non-trivial"``/``"reasonable"`` (cFEB, C6MR, ivhk), our paper ``"easy to follow"``/``"clear"`` (9XGJ, C6MR), and that our method has ``"excellent"`` performance (9XGJ) and ``"superiority"`` (C6MR) over SOTA methods in terms of both ``"accuracy"`` and ``"uncertainty estimation"`` (C6MR, ivhk). Below we address the reviewers' questions one by one. Due to the space constraint, i.e., 6000 characters per reviewer, we cannot cover every question in the rebuttal, but we promise to address all questions, cite all related references, and **include all discussions/results below in our revision**. **[cFEB, ivhk] Q1. The Number of Bins.** Our preliminary results indicate that the performance of our VIR (as compared to the SOTA baseline, i.e., Ranksim [2]) remains consistent regardless of the number of bins, as shown in the included tables below (more results in **Table 2 of the one-page PDF file**). Here we report results for the cases with $100/1=100$, $100/3\approx 33$, and $100/5=20$ bins.

Table 1: Ablation studies on the number of bins in terms of MSE

| model | bins | overall | many | median | few |
| :---------: | :------: | :------: | :------: | :------: | :------: |
| Ranksim | 100 | 83.51 | 71.99 | 99.14 | 149.05 |
| VIR (Ours) | 100 | **81.76** | **70.61** | **91.47** | **142.36** |
| Ranksim | 33 | 109.45 | 91.78 | 128.10 | 187.13 |
| VIR (Ours) | 33 | **84.77** | **77.29** | **95.66** | **125.33** |
| Ranksim | 20 | 98.71 | 84.38 | 107.89 | 171.04 |
| VIR (Ours) | 20 | **84.05** | **72.12** | **100.49** | **151.25** |

In our paper, we chose to use the same number of bins as the imbalanced regression literature [1, 2] for fair comparison with prior work. 
For example, in the AgeDB dataset where the regression labels are people's "age" in the range of 0~99, we use 100 bins, with each year as one bin. We will include the discussion and results above in the revised paper. **[cFEB, ivhk] Q2: Advantage of Stochastic Latent Features / Dual Representation.** That is a good question. Yes, stochastic latent features (i.e., dual representation) indeed improve the performance. We did conduct ablation studies in Table 8 and Table 9 of the Supplementary Material to verify this claim. In these tables, "Predictor-only VIR" is equivalent to our VIR without stochastic latent features and LDS. These results verify the effectiveness of such stochastic latent features. We have also conducted additional ablation studies to further support our claim. The results in Tables 2.1 and 2.2 below (more results in **Table 1 of the one-page PDF file**) demonstrate the importance of stochastic latent features (i.e., dual representation) in our proposed method. In the tables, **VIR w/o VAE** refers to VIR without stochastic latent features. These results verify that the stochastic latent features can indeed improve the performance.

Table 2.1: Ablation studies on AgeDB in terms of MSE

| model | overall | many | median | few |
| :---------: | :------: | :------: | :------: | :------: |
| VIR w/o VAE | 96.46 | 86.72 | 102.56 | 171.52 |
| VIR (Ours) | **81.76** | **70.61** | **91.47** | **142.36** |

Table 2.2: Ablation studies on AgeDB in terms of NLL

| model | overall | many | median | few |
| :---------: | :------: | :------: | :------: | :------: |
| VIR w/o VAE | 3.784 | 3.685 | 3.866 | 4.218 |
| VIR (Ours) | **3.703** | **3.598** | **3.805** | **4.196** |

**[ivhk, 9XGJ] Q3: Why Eqn. 5 Focuses on Minority Data (Line 42-43) and Alleviates the Imbalance Problem.** We are sorry for the confusion. To see why, note that Eqn. 5 is the *negative log likelihood* for the Normal-Inverse-Gamma (NIG) distribution. 
Specifically, each posterior parameter ($\nu_i^*, \gamma_i^*, \alpha_i^*$) of the NIG distribution is reweighted by importance weights, thereby assigning higher weights to minority data during training and allowing minority data points to benefit more from their neighboring information. Take $\nu_i^*$ as an example. Assume a minority data point $(x_i,y_i)$ that belongs to bin $b$, i.e., its label $y_i=y_b$. Note that there is **a loss term** $(y_i-\gamma_i^*)^2\nu_i^*$ in **Eqn. 5**, where $\gamma_i^*$ is the model prediction, $y_i$ is the label, and $\nu^{*}_{i}$ is the "importance weight" for this data point. Here $\nu_i^* = \nu_0 + (\sum_{b' \in \mathcal{B}} k (y_b, y_{b'}) p(y_{b'}))^{-1/2} \cdot n_i$ where $n_i$ represents the pseudo-count for the NIG distribution. Since $(x_i,y_i)$ is a minority data point, data from its neighboring bins has smaller frequency $p(y_{b'})$ and therefore smaller $\sum_{b' \in \mathcal{B}} k (y_{b}, y_{b'}) p(y_{b'})$, leading to **a larger "importance weight"** $\nu_i^*$ **for this minority data point in Eqn. 5**. This "allows the negative log likelihood (Eqn. 5) to naturally put more focus on the minority data" (Line 42-43), thereby alleviating the imbalance problem. **[ivhk] Q4: The Choice of the Kernel Functions.** The DIR paper [1] shows that a simple Gaussian kernel with inverse square-root weighting (i.e., SQINV) achieves the best performance. We therefore use SQINV throughout our paper for fair comparison (more details in Line 307 of the main paper). Besides, our preliminary results also show that the performance is not very sensitive to the choice of kernels, as long as the kernel $k(a,b)$ reflects the distance between $a$ and $b$, i.e., larger distance between $a$ and $b$ leads to smaller $k(a,b)$. [1] Delving into Deep Imbalanced Regression. [2] RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression. Pdf: /pdf/7e376ad76fe2f7eb0a922fae0c9f7e821996d05d.pdf
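The reweighted posterior parameter $\nu_i^*$ described in Q3 above can be sketched as a small NumPy function (the defaults for $\nu_0$ and $n_i$ and the toy density are illustrative assumptions):

```python
import numpy as np

def posterior_nu(y_b, bin_labels, bin_density, n_i=1.0, nu0=1.0, sigma=2.0):
    """nu_i^* = nu_0 + (sum_{b'} k(y_b, y_b') p(y_b'))^{-1/2} * n_i."""
    k = np.exp(-(bin_labels - y_b) ** 2 / (2.0 * sigma ** 2))  # Gaussian kernel
    return nu0 + (k @ bin_density) ** -0.5 * n_i
```

A minority label (i.e., one whose neighborhood has low density) gets a larger `nu*`, and hence a larger weight on its loss term $(y_i-\gamma_i^*)^2\nu_i^*$ in Eqn. 5, which is the reweighting mechanism described in Q3.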
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
HIQL: Offline Goal-Conditioned RL with Latent States as Actions
Accept (spotlight)
Summary: The paper introduces HIQL, a hierarchical algorithm for offline goal-conditioned RL. HIQL utilizes an action-free version of IQL to learn the value function and subsequently derives both high-level and low-level policies from this shared value function using AWR. The paper asserts that the hierarchical structure offers increased robustness to value function noise, leading to enhanced performance in achieving long-horizon goals. The effectiveness of the proposed method is validated through experiments conducted in both state-based and pixel-based tasks. Strengths: * Overall, the paper is well written and well organized. * The proposed method HIQL incorporates IQL into a hierarchical framework, similar to prior offline RL work POR. However, HIQL incorporates additional designs from hierarchical RL, including predicting latent waypoints and k-step waypoints. Consequently, it is a new approach from both directions, offering valuable insights for tackling offline GCRL with hierarchical RL. * The examples in Section 4.1 and Section 4.3 are helpful for readers' understanding. * The experiments are thorough, showing improved performance on both state-based and image-based tasks. Weaknesses: * The given examples show that the hierarchical structure is more robust to value noise in discrete tasks. Generally, in continuous tasks, the value function tends to be smoother. This raises the question of whether the performance improvement observed in HIQL is solely attributed to the hierarchical structure. If the claim in the paper is correct, a method using a smoothed value function (e.g., [1]) without a hierarchical structure may achieve comparable performance to HIQL. Additional explanation and comparison are encouraged. * More information needs to be provided to reproduce Figure 2. * In the appendix, the authors concat [s,g] before feeding into $\phi$. Therefore, it is recommended that Figure 1 be revised to address this matter.
* In the experiments, all the baselines considered are related to (weighted) imitation learning, making the inclusion of additional baselines utilizing temporal difference learning [2][3] necessary for a more comprehensive evaluation. In addition, it is notable that HIQL leverages the advantage of goal relabeling, while baselines such as IQL do not utilize this technique. Despite this, IQL has already demonstrated strong performance on the pixel-based Procgen Maze benchmark. Therefore, it remains unknown if HIQL can achieve comparable performance when compared to IQL+HER. * For the experiments, I encourage the authors to report the performance of the low-level policy without the high-level policy. This can effectively highlight the true usefulness of the hierarchical structure. * There is an error in Eq (7) in which the variable g is mistakenly used to train the low-level policy, while $s_{t+1}$ in the expectation is not used. [1] Hong Z W, Yang G, Agrawal P. Bilinear value networks[J]. arXiv preprint arXiv:2204.13695, 2022. [2] Chebotar Y, Hausman K, Lu Y, et al. Actionable models: Unsupervised offline reinforcement learning of robotic skills[J]. arXiv preprint arXiv:2104.07749, 2021. [3] Li J, Tang C, Tomizuka M, et al. Hierarchical planning through goal-conditioned offline reinforcement learning[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 10216-10223. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Additional explanation is needed to clarify how the hierarchical structure can be applicable in continuous situations, as well as a comparison with smoothed Q methods * More information should be provided to enable the reproduction of Figure 2. * Additional baselines for temporal difference learning and goal relabeling should be included in order to facilitate a more comprehensive and fair comparison. 
* Authors are encouraged to report the performance of the low-level policy in order to emphasize the necessity of a hierarchical structure. * Is it possible to expand the HIQL framework to include more than two levels? Considering that the parameter k is manually determined, the top level could be assigned a large k value, while a medium level would utilize a moderate k value. * A fix in Figure 1 and Eq (7) may be needed. ## post rebuttal I am delighted to see that the authors have addressed all of my concerns, and I am pleased to raise my score for this paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: One limitation mentioned in the paper is the necessity for deterministic dynamics. It would be beneficial for the authors to further elaborate on why disentangling the controllable aspects from the uncontrollable elements in the environment is advantageous. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
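The action-free IQL value learning summarized in this review can be sketched with a toy expectile regression on V-only TD errors. This is an illustration under assumed settings (a 5-state chain, tabular values, made-up hyperparameters), not the authors' implementation:

```python
import numpy as np

# Expectile regression for an action-free, goal-conditioned value function:
# the TD target uses only V (no Q-function), and an asymmetric L2 loss with
# expectile tau > 0.5 biases V toward the better in-support transitions.

def expectile_weight(td_error, tau=0.9):
    # Gradient weight of the asymmetric loss |tau - 1(td < 0)| * td^2.
    return tau if td_error > 0 else 1 - tau

gamma, lr = 0.99, 0.5
V = np.zeros(5)                 # V(s, g) for states 0..4, goal g = state 4

rng = np.random.default_rng(0)
for _ in range(2000):
    s = int(rng.integers(0, 4))
    s_next = s + 1                        # dataset transitions walk toward g
    r = 0.0 if s_next == 4 else -1.0      # 0 at the goal, -1 elsewhere
    td = r + gamma * V[s_next] - V[s]
    V[s] += lr * expectile_weight(td) * td

# Value increases monotonically with proximity to the goal.
assert V[0] < V[1] < V[2] < V[3]
```

With $\tau > 0.5$, positive TD errors are weighted more heavily, which approximates a maximum over in-support transitions without requiring action labels.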
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and suggestions for improving the work. Below, we describe how we have revised the paper to clarify several points and address the questions raised by the reviewer. We also clarify how we have already compared to baselines that use HER, and present new results from three additional baselines. We believe that these changes strengthen the paper, and welcome additional suggestions for further improving the work. - **“It remains unknown if HIQL can achieve comparable performance when compared to IQL+HER”** We would like to clarify that “IQL” in this paper always refers to “**IQL+HER**” (i.e., goal-conditioned IQL) (L297) (and likewise “POR” in this paper always denotes “POR + HER”), and they use exactly the **same** goal relabeling strategy as HIQL. This relabeling strategy (described in Appendix A) is directly taken from a prior work without modification [15]; thus it is not tuned for our method, which ensures a fair comparison. To further clarify that the baselines are also goal conditioned and employ hindsight relabeling, we will refer to them as “GC-IQL” and “GC-POR” in the final version of the paper. * **Additional baselines that do not use (weighted) imitation learning** Thank you for the suggestion. We have now compared HIQL with **three** additional baselines that do not use (weighted) imitation learning. As we were unable to find publicly available official implementations of Actionable Models (AM) [17] or HiGOC [9], we tried to evaluate AM using both our own re-implementation and another re-implementation by Ma et al. [14]. However, despite our extensive efforts to tune its hyperparameters on two different codebases, we were unable to achieve performance above 1% even on the simplest AntMaze-Medium tasks.
Hence, we instead evaluated the performance of **goal-conditioned CQL** (“GC-CQL”) [13], another temporal difference learning-based method that uses a conservative objective similar to that of AM [17]. For a fair comparison, we used the same hindsight relabeling strategy for GC-CQL and tuned its hyperparameters individually for each D4RL task. Moreover, we made additional comparisons with two recent offline goal-conditioned RL methods (Contrastive RL [11] and GCPC [2]) that report their performance on the goal-conditioned variants of D4RL tasks (AntMaze and/or Kitchen). **Contrastive RL** is a goal-conditioned value learning method based on contrastive learning, and **GCPC** is an offline goal-conditioned RL method that models trajectories using a BERT-style Transformer with masking techniques. For these methods, we directly used the reported results from the respective papers [2, 11]. We present the full comparison results (averaged over 4 seeds) on D4RL environments in **Table 1** in the supplementary PDF. The results show that HIQL mostly achieves the best performance in these tasks, outperforming (or at least matching) the new baselines. * **Flat IQL with a smoothed value function** As the reviewer pointed out, a regularized (smoothed) value function may also alleviate the signal-to-noise challenge in a continuous state space. To empirically verify this hypothesis, we evaluated the performance of flat goal-conditioned IQL (GC-IQL) with a bilinear value function $V(s, g) = f(s)^\top \psi(g)$ with 512-dimensional $f$ and $\psi$. We present the results across four different tasks in **Table 2** in the supplementary PDF. Despite the improved smoothness of the value function, Bilinear GC-IQL shows worse performance than the original GC-IQL. We believe this is due to the limited expressivity of bilinear value functions compared to the original monolithic value functions. We will include a discussion about continuous state spaces in the final version of the paper.
* **Performance of the low-level policy without a high-level policy** As per the reviewer’s suggestion, we evaluated the performance of HIQL’s low-level policy (without a high-level policy) on D4RL tasks. **Table 3** in the supplementary PDF presents the results across four different tasks. The results show that HIQL without a high-level policy almost completely fails to solve these tasks. This is because the low-level policy is trained to reach only nearby goals (Eq. (7)). If we train the low-level policy with the full set of goals, it becomes equivalent to GC-IQL, whose performance still falls behind that of HIQL. This highlights the necessity of our hierarchical structure. * **Is it possible to expand the HIQL framework to include more than two levels?** Thank you for raising this point. As the reviewer mentioned, it is indeed possible to have a recursive structure, in which higher-level policies produce subgoals for the policies at the next level down, and only the lowest-level policy produces actions. At an earlier stage of this research, we tested 3- or 4-level hierarchies on AntMaze-Large with waypoint steps of (25, 5) or (100, 25, 5). However, we found that they do not significantly improve (or sometimes hurt) performance compared to the two-level policy structure. This is likely because (1) a two-level policy is sufficient for AntMaze-Large given its episode length, and (2) policy errors can accumulate as the hierarchy grows. However, we believe such a recursive structure has the potential to solve much longer-horizon problems (where an improved signal-to-noise ratio outweighs accumulated policy errors), and further extending HIQL in this way with a highly scalable architecture is an exciting future research direction. * **Code to reproduce Figure 2** We have uploaded the ipynb file that we used to produce Figure 2 to our anonymized repository. * **Improving Figure 1, Typo in Equation (7)** Thank you for the suggestions! 
We have addressed the issue with Figure 1 and fixed the typo in Equation (7) in a revised version of the paper. Please let us know if there are any additional concerns or questions. **References**: Please see our global response. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thank you for providing such a detailed response. After thoroughly reading your explanation, I am pleased to say that all of my concerns have been addressed. The clarification of "IQL+HER" and the additional comparisons have significantly strengthened the paper's persuasiveness. I found the discussion on the multi-level hierarchical structure particularly intriguing, and I hope to see it included in the revised version of the paper. Overall, I am now inclined to raise my score to 7, and I am delighted to see this work accepted by the conference.
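The bilinear value function $V(s, g) = f(s)^\top \psi(g)$ evaluated in the rebuttal above can be sketched as follows. This is a minimal illustration with made-up network sizes and initialization, not the code used for the experiments:

```python
import numpy as np

# Bilinear value function V(s, g) = f(s)^T psi(g): two small MLPs map the
# state and the goal to embeddings, and the value is their inner product.
# All dimensions and weight scales here are illustrative.

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden, out_dim):
    return (rng.normal(0, 0.1, (in_dim, hidden)),
            rng.normal(0, 0.1, (hidden, out_dim)))

def mlp(params, x):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2

state_dim, embed_dim = 29, 512          # e.g. an AntMaze-sized state
f_params = init_mlp(state_dim, 256, embed_dim)
psi_params = init_mlp(state_dim, 256, embed_dim)

def bilinear_value(s, g):
    # Inner product of two embeddings: smoother in (s, g) than a single
    # monolithic MLP over [s; g], at the cost of expressivity.
    return float(mlp(f_params, s) @ mlp(psi_params, g))

v = bilinear_value(rng.normal(size=state_dim), rng.normal(size=state_dim))
assert np.isfinite(v)
```

The factorized form constrains how $V$ can vary jointly in $s$ and $g$, which is the smoothness/expressivity trade-off the rebuttal points to.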
Summary: This paper introduces a hierarchical approach to address the issue of offline goal-conditioned problems, specifically when certain trajectories in the dataset have missing actions. The proposed method tackles this challenge by simultaneously learning a value function with a modified version of IQL and extracting the high-level and low-level policies from the learned value function. The high-level policy predicts the representation of waypoints, while the low-level policy, conditioned on the waypoint, determines the primitive actions. Extensive experimental results on offline goal-conditioned tasks demonstrate the effectiveness of the proposed approach. Strengths: 1. The writing is very easy to follow. 2. The paper proposes an elegant approach to the goal-conditioned offline RL problem by decomposing the high-level and low-level policy while sharing the same value function V. 3. The approach taken in this paper allows the adoption of the existing large amount of action-free data. 4. Good empirical performance on a comprehensive set of evaluation environments with both state and pixel observations. Weaknesses: For some of the design choices adopted in this paper, I think there are some alternatives in the literature that are good to compare with. See my comment in the question section. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. (**Clarification Question**) I am confused by the notation $V(s, \phi(g))$, $\pi(a|s, \phi(g))$. I understand that for the high-level policy, the waypoint it predicts is the latent representation, not the pixel values of the image. But is $s$ also represented by the same state representation/encoder $\phi$? So is it $V(\phi(s), \phi(g))$ (or with some stop gradient on either $\phi(s)$ or $\phi(g)$)? 2. (**How to learn the value function $V$**) In this paper, the authors adapt the IQL algorithm to learn the value function V (with some modification to account for missing actions).
But this is not the only approach to learn the value function. [1] proposes to learn the value function V by dualizing the objective, which also does not require action labels in the dataset. Maybe it would be nice to compare the two approaches and see which yields a better-quality value function, since this is crucial for the extraction of high-quality policies. [1] Ma et al. VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and constructive feedback about this work. * **How to encode $s$ in $V(s, \phi(g))$?** For pixel-based environments, we encode $s$ and $g$ separately into $\psi(s)$ and $\phi(g)$ using two *different* CNNs, $\psi$ and $\phi$. We then concatenate $\psi(s)$ to $\phi(g)$ and model $V(\psi(s), \phi(g))$ using an MLP. (For state-based environments, we directly concatenate $s$ to $\phi(g)$, without having a separate encoder for $s$.) Since only the $\phi(g)$ component is used outside as a goal representation, we simply denoted $V(\psi(s), \phi(g))$ as $V(s, \phi(g))$ in the paper (treating $\psi(s)$ as a “black box” within $V$). However, we recognize that this could have led to some confusion, and we will clarify this point in the final version of the paper. * **Alternatives to IQL for learning a goal-conditioned value function** As the reviewer pointed out, our hierarchical policy extraction scheme is *orthogonal* to the choice of the underlying offline RL algorithm used to learn a goal-conditioned value function $V(s, g)$. In this work, we chose to use IQL for its effectiveness and simplicity, as it does not require any other additional components and is easy to use. However, it is indeed possible to combine HIQL with other value-based offline RL algorithms, such as VIP [10], Contrastive RL [11], or Quasimetric RL [12] (or even CQL [13] or GoFAR [14] if we have action labels), and we believe our hierarchical policy structure can still be beneficial in these cases, as our theoretical results do not depend on the underlying value learning algorithm. We will clarify this point in the final paper. We believe studying different underlying offline RL algorithms that can further enhance this idea is an interesting direction for future research. Please let us know if there are any additional concerns or questions. **References**: Please see our global response. --- Rebuttal Comment 1.1: Title: Thanks! 
Comment: Thanks for the response. I appreciate the additional experiments on two pixel-based environments as well as including more baselines into the comparison. I will be sticking to the deserved high score that I have given the paper.
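The dual-encoder scheme clarified in the rebuttal above (separate encoders $\psi$ and $\phi$ for state and goal, with only $\phi(g)$ reused outside as the goal representation) can be sketched as follows; the linear "encoders" and all dimensions here are stand-ins for the actual CNNs:

```python
import numpy as np

# V(psi(s), phi(g)): pixel observations s and g pass through two different
# encoders, and a value head consumes the concatenation [psi(s); phi(g)].
# Linear maps stand in for CNNs; every shape here is illustrative.

rng = np.random.default_rng(0)
obs_dim, embed_dim = 64 * 64 * 3, 10

W_psi = rng.normal(0, 0.01, (obs_dim, embed_dim))   # state encoder psi
W_phi = rng.normal(0, 0.01, (obs_dim, embed_dim))   # goal encoder phi
W_v = rng.normal(0, 0.1, (2 * embed_dim,))          # value head

def value(s, g):
    z = np.concatenate([s @ W_psi, g @ W_phi])      # [psi(s); phi(g)]
    return z @ W_v                                  # V(psi(s), phi(g))

def goal_representation(g):
    # Only phi(g) is reused outside the value function, e.g. as the
    # subgoal representation produced by the high-level policy.
    return g @ W_phi

s = rng.random(obs_dim)
g = rng.random(obs_dim)
assert np.isfinite(value(s, g))
assert goal_representation(g).shape == (embed_dim,)
```

Treating $\psi(s)$ as internal to $V$ is what lets the rebuttal write $V(s, \phi(g))$ while still encoding $s$ separately.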
Summary: The paper identifies an issue with offline goal-conditioned reinforcement learning: namely that goal-conditioned value estimation can be noisy, so long-horizon tasks can be difficult to accomplish due to accumulating errors in value estimation. In addition, when states are so close together, there is very little signal for learning as any mistakes can be corrected in future states. Thus, HIQL is proposed, which learns a high-level goal-conditioned policy to predict subgoals (such that there is sufficient learning signal), and a low-level goal conditioned policy to predict actions, which benefits from only considering nearby goals. Experiments show the advantages of this approach with respect to baseline methods in hierarchical and flat offline RL, as well as the ability to use unlabeled (without action) trajectory data to improve learning, and data efficiency. Strengths: - The argument is extremely clear and well exposed - I appreciate the demonstration of issues with current methods (Fig. 2, 3, 4) before introducing the solution, which helps my understanding of the field in general. Such a simple demonstration, though perhaps not exactly well validated, may even be worth more as expository writing than the final proposed method. - The method is simple, in particular reuse of internal representations instead of learning some VAE or something more complex is nice. - The ability to have different data requirements for different parts of the policy is quite nice, given that action labels are harder to come by (Table 3). Weaknesses: - Though the exposition is very clear, sometimes it is a bit repetitive. In particular, the same justification for hierarchy is pointed out in line 36 and 160, as well as the repetition between sections 4.2 and section 5, though I can see how this is a matter of personal taste. - The difference in domain between Figure 2 and Figure 3 is a little bit odd, though this isn't a huge issue. 
The domain in Figure 3 is maybe a bit _too_ toy, which leads me to discount Proposition 4.1; the major point was already made in Figure 2. - I'm a bit suspect of the claim that Procgen Maze is a good demonstration for a visual environment, given that the visual observation is already so structured. It would be nice to work on a more complex task within Procgen, like Coinrun, though I'm unaware of the data availability for these tasks. - Proposition 5.1 is unnecessary in my opinion; either that, or just a simple proof sketch could be given inline. - I think a more in-depth discussion of closely related work is merited. Novelty is not very clear to me from the given presentation. In particular, the proposed method is so simple that it's surprising to me that it does not already exist. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Besides the points in the weakness section, it's a bit unclear to me the novelty of this submission, so I would appreciate some further contextualization to other hierarchical methods that take advantage of data. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and constructive feedback about this work. * **“Novelty is not very clear to me from the given presentation.”** One major difference between our approach and prior hierarchical methods is that we extract both policies (and even representations) from a **single** (non-hierarchical) value function. This eliminates the need for additional components, unlike previous hierarchical methods that train separate value functions [4, 5] or use potentially complex high-level subgoal planning procedures [6, 7, 8, 9]. Despite its simplicity, perhaps surprisingly, we show that this simple technique can significantly improve performance both in theory and in practice, due to an improved “signal-to-noise” ratio (Section 4). We have clarified this point in both the Introduction and Related Work sections in a revised version of the draft. * **Additional pixel-based environments** To verify the effectiveness of HIQL in more diverse visual environments beyond Procgen Maze, we evaluated HIQL and prior methods on **two** additional pixel-based benchmarks: Roboverse and Visual AntMaze. **Roboverse** [3] is a pixel-based, goal-conditioned robotic manipulation task that requires multi-stage reasoning and generalization, where the agent must learn to control a robot arm to manipulate objects purely from pixels. We use the same dataset and tasks used by Zheng et al. [3]. **Visual AntMaze** is a vision-based variant of the AntMaze environment, where we provide the agent with a $64 \times 64 \times 3$ camera image and its proprioceptive states, excluding the global coordinates. Hence, the agent must learn to navigate the maze based on the wall structure and floor color from the image. Please find illustrations of these environments in **Figure 1** of the supplementary PDF. For the datasets, we render the ‘antmaze-large-diverse-v2’ and ‘antmaze-large-play-v2’ datasets from the D4RL benchmark. 
We additionally employ a more challenging dataset, ‘antmaze-large-navigate-v2’, which consists of diverse navigation behaviors that visit multiple goal locations within an episode. In these two additional pixel-based environments, we compare the performances of HIQL (ours), goal-conditioned IQL (“GC-IQL”), goal-conditioned POR (“GC-POR”), HGCBC, and GCBC (with individually tuned hyperparameters). We report the results (averaged over 8 seeds, $\pm$ denotes standard deviations) below:

| Task | GCBC | HGCBC (+ repr.) | GC-IQL | GC-POR (+ repr.) | **HIQL (ours)** |
|---|---:|---:|---:|---:|---:|
| visual-antmaze-diverse | $71.4 \pm 6.0$ | $35.1 \pm 12.0$ | $72.6 \pm 5.9$ | $47.4 \pm 17.6$ | $\mathbf{80.5} \pm 9.4$ |
| visual-antmaze-play | $64.4 \pm 6.3$ | $23.8 \pm 8.5$ | $70.4 \pm 26.6$ | $57.0 \pm 8.1$ | $\mathbf{78.4} \pm 4.6$ |
| visual-antmaze-navigate | $33.2 \pm 7.9$ | $21.4 \pm 4.6$ | $22.1 \pm 14.1$ | $16.1 \pm 15.2$ | $\mathbf{45.7} \pm 18.1$ |
| roboverse | $26.2 \pm 4.5$ | $26.4 \pm 6.4$ | $31.2 \pm 8.7$ | $46.6 \pm 7.4$ | $\mathbf{61.5} \pm 5.3$ |

The table above shows that HIQL outperforms previous methods in these vision-based tasks as well. Notably, in Roboverse, HIQL is capable of generalizing to solve unseen robotic manipulation tasks purely from images, achieving an average success rate of 62%. We have included these results and experimental details in a revised version of the paper. * **Writing suggestions** Thank you for the suggestions! We will incorporate them into the final version of the paper. Please let us know if there are any additional concerns or questions. **References**: Please see our global response. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarification on novelty, especially as I am somewhat unfamiliar with the prior work. They should take my original comment on surprise at the method being novel as a compliment. Simplicity is always appreciated.
I also appreciate the additional visual domain experiments, which are much more convincing to me than the original maze domain. I see that the conclusions in the original paper remain undisturbed. I don't have any additional concerns at this time, and I think the original score of 7 that I gave is merited.
Summary: This paper introduces hierarchical IQL (HIQL) for offline goal-conditioned reinforcement learning. HIQL uses the IQL algorithm to learn both a high-level waypoint policy as well as a low-level action policy; in both cases, the goal-conditioned value is provided by the same value function learned using the goal-conditioned IQL procedure. HIQL is evaluated on both state-based as well as image-based environments that require hierarchical planning. In all settings, HIQL outperforms prior methods. Strengths: Originality: This paper builds on IQL, which has already been extended to goal-conditioned settings, though not in the hierarchical manner this paper proposes. Furthermore, the use of a single value function is a nice idea. The didactic examples and theoretical results are also interesting and contribute to the overall story. Therefore, I believe this paper contains enough original components for NeurIPS. Quality and Clarity: This paper is very well written and presented. The experiments are rigorously executed and explained. Significance: Offline GCRL is an emerging and important topic to study, and this paper proposes a conceptually simple and effective algorithm for the setting. Weaknesses: This is a strong paper without major weaknesses. There are several places to further improve the paper: 1. An ablation that demonstrates that AWR is needed for both policies. I suspect that it could be possible that it is not super important for the low-level controller if the horizon is short enough. 2. The connection with existing approaches that learn a high-level planner and use an inverse dynamics model to back out low-level actions can be better drawn in related work. 3. I believe a more substantive error bound in the form of policy performance can be derived in addition to Proposition 4.1; [1] makes one such attempt. [1] Ajay, Anurag, et al. "Opal: Offline primitive discovery for accelerating offline reinforcement learning." arXiv preprint arXiv:2010.13611 (2020).
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have listed several suggestions in the section above. Overall, I think this is a solid paper, and I recommend acceptance. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and constructive feedback about this work. As suggested by the reviewer, we ran an additional ablation experiment to study the use of AWR for both policies. * **An ablation that demonstrates that AWR is needed for both policies** We ablated both low-level AWR and high-level AWR by replacing either of them with BC, and evaluated them on both AntMaze, which has a relatively large waypoint step ($k=25$ for ‘-large’ and $k=50$ for ‘-ultra’), and Procgen Maze, which has a relatively small waypoint step ($k=3$). We report the performance below (the results are averaged over 4 seeds and $\pm$ denotes standard deviations).

| Task | HIQL (ours) (High AWR, Low AWR) | Ablation (High AWR, Low BC) | Ablation (High BC, Low AWR) |
|------------------------|--------------------------|-----------------------------|-----------------------------|
| antmaze-large-diverse | $88.2 \pm 5.3$ | $60.1 \pm 13.3$ | $\mathbf{91.3} \pm 5.1$ |
| antmaze-ultra-diverse | $\mathbf{52.9} \pm 17.4$ | $23.6 \pm 9.2$ | $47.6 \pm 13.7$ |
| procgen-maze-500-train | $\mathbf{82.5} \pm 6.0$ | $74.0 \pm 12.6$ | $17.5 \pm 5.7$ |
| procgen-maze-500-test | $\mathbf{64.5} \pm 13.2$ | $52.0 \pm 11.7$ | $19.5 \pm 9.6$ |

The table above shows that both high-level AWR and low-level AWR are important for performance. However, their relative importance may depend on the dataset. In AntMaze, low-level AWR is more important than high-level AWR (due to the data collection strategy; the AntMaze datasets consist of single-goal-reaching trajectories with noisy actions, in which case BC is a reasonable objective for the high-level policy but not for the low-level policy). On the contrary, in Procgen Maze, high-level AWR is more important than low-level AWR (due to both the diversity of the dataset and the relatively small waypoint steps).
* **Connection with existing approaches that learn a high-level planner and use an inverse dynamics model to back out low-level actions** Thank you for the suggestion. We will discuss these approaches [18, 19, 20, 21, 22] in the related work section of the final version. * **An additional error bound based on suboptimality** Thank you for the suggestion. In an earlier version of the draft, we made initial attempts at deriving a bound similar to that of Ajay et al., but found removing the assumption of two value functions (we only use one) to be difficult. If we are able to successfully derive a hierarchical performance bound based on a single value function, we will include it in the final paper. Please let us know if there are any additional concerns or questions. **References**: Please see our global response. --- Rebuttal Comment 1.1: Title: Thank You for Your Response Comment: Dear Authors, Thank you for your response. I have carefully read it and am satisfied with the new experimental results that justify the design decision of using AWR at both levels. Given the initial high rating, I will keep my original score as I believe that this paper merits acceptance at NeurIPS.
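The AWR-vs-BC ablation discussed above comes down to how each objective weights dataset actions: AWR weights the imitation loss by $\exp(A(s,a)/\beta)$, while BC weights all samples uniformly. A small numerical illustration with synthetic advantages (our own example, not from the paper):

```python
import numpy as np

# Contrast between AWR and BC policy-extraction weights: AWR weights each
# (s, a) pair by exp(A(s, a) / beta); BC weights all pairs uniformly.
# The advantage values and temperature beta below are synthetic.

beta = 1.0
advantages = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

awr_weights = np.exp(advantages / beta)
awr_weights /= awr_weights.sum()
bc_weights = np.full_like(advantages, 1 / len(advantages))

# AWR concentrates the imitation loss on high-advantage actions, which
# matters when the dataset contains noisy or suboptimal actions.
assert awr_weights[-1] > bc_weights[-1]
assert awr_weights[0] < bc_weights[0]
```

This is why low-level AWR matters most on AntMaze's noisy single-goal trajectories, while plain BC remains reasonable where the dataset actions are already near-optimal for the conditioning goal.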
Rebuttal 1: Rebuttal: We appreciate all five reviewers’ constructive feedback and suggestions for improving the work. We would like to highlight the updates we made in our responses below. - We evaluated HIQL and baselines on $\mathbf{2}$ **additional pixel-based environments**, Roboverse and Visual AntMaze, which demonstrate the effectiveness of HIQL in more diverse visual domains (Reviewer cj3e). - We compared HIQL with $\mathbf{3}$ **additional baselines** — goal-conditioned CQL, contrastive RL, and GCPC — on D4RL tasks (Reviewers tvKa and kqhw). - We ablated various aspects of HIQL, which shows the necessity of both high-level AWR and low-level AWR and the necessity of having a high-level policy (Reviewers xrL8 and kqhw). Below are the references we use in our responses: [1] Jiang et al., Efficient Planning in a Compact Latent Action Space. ICLR 2023. [2] Zeng et al., Goal-Conditioned Predictive Coding as an Implicit Planner for Offline Reinforcement Learning. arXiv 2023. [3] Zheng et al., Stabilizing Contrastive RL: Techniques for Offline Goal Reaching. arXiv 2023. [4] Nachum et al., Data-Efficient Hierarchical Reinforcement Learning. NeurIPS 2018. [5] Levy et al., Learning Multi-Level Hierarchies with Hindsight. ICLR 2019. [6] Shah et al., Rapid Exploration for Open-World Navigation with Latent Goal Models. CoRL 2021. [7] Fang et al., Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space. IROS 2022. [8] Fang et al., Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks. CoRL 2022. [9] Li et al., Hierarchical Planning Through Goal-Conditioned Offline Reinforcement Learning. RA-L 2022. [10] Ma et al., VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training. ICLR 2023. [11] Eysenbach et al., Contrastive Learning as Goal-Conditioned Reinforcement Learning. NeurIPS 2022. 
[12] Wang et al., Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning. ICML 2023. [13] Kumar et al., Conservative Q-Learning for Offline Reinforcement Learning. NeurIPS 2020. [14] Ma et al., How Far I'll Go: Offline Goal-Conditioned Reinforcement Learning via f-Advantage Regression. NeurIPS 2022. [15] Ghosh et al., Reinforcement Learning from Passive Data via Latent Intentions. ICML 2023. [16] Hong et al., Bilinear value networks. ICLR 2022. [17] Chebotar et al., Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills. ICML 2021. [18] Torabi et al., Behavioral Cloning from Observation. IJCAI 2018. [19] Schmeckpeper et al., Reinforcement Learning with Videos: Combining Offline Observations with Interaction. CoRL 2020. [20] Baker et al., Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos. NeurIPS 2022. [21] Chang et al., Learning Value Functions from Undirected State-only Experience. ICLR 2022. [22] Zheng et al., Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories. ICML 2023. Pdf: /pdf/08415eaae14578685fc71ef5c297392b4f852d7f.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes HIQL, a hierarchical algorithm for offline goal-conditioned RL. The approach consists of utilizing a single action-free value function to acquire knowledge about the structure and employing two policies: a high-level policy that predicts or represents a waypoint, and a low-level policy that predicts the action required to reach that waypoint. The approach is well explained and motivated. The experiments comparing most state-of-the-art offline RL approaches on the main benchmarks seem convincing. Strengths: Reasonably novel approach, well explained and illustrated with convincing results on the main offline RL benchmarks of locomotion and manipulation against reasonable baselines. Weaknesses: Maybe evaluating TT on Kitchen as a baseline would have been useful to improve the comparison part. Otherwise, no major weakness in my opinion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why not use TT in the Kitchen scenario for comparison? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Nothing noticeable IMO. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
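To make the waypoint decomposition described in the summary above concrete, here is a minimal toy sketch. This is our own illustration, not the authors' code: the two policies are hand-written geometric rules on a 2-D point environment rather than networks extracted from a learned value function.

```python
import numpy as np

def high_level_policy(state, goal, k=5.0):
    """Stand-in for a high-level policy: propose a waypoint (subgoal)
    at most k units along the straight line from the state to the goal."""
    direction = goal - state
    dist = np.linalg.norm(direction)
    if dist <= k:
        return goal
    return state + direction / dist * k

def low_level_policy(state, waypoint, max_step=1.0):
    """Stand-in for a low-level policy: a bounded primitive action
    moving toward the current waypoint."""
    direction = waypoint - state
    dist = np.linalg.norm(direction)
    if dist == 0.0:
        return np.zeros_like(state)
    return direction / dist * min(max_step, dist)

# Rolling out the two levels together: the high level re-plans a waypoint
# every step, and the low level executes an action toward it.
state, goal = np.zeros(2), np.array([10.0, 0.0])
for _ in range(20):
    waypoint = high_level_policy(state, goal)
    state = state + low_level_policy(state, waypoint)
```

In HIQL itself both levels are learned from a single action-free value function; the sketch only illustrates the division of labor between the two policies.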
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and constructive feedback about this work. * **Why not use TT in Kitchen for comparison?** As mentioned in L303, we directly took the reported performance of the Trajectory Transformer (TT) and Trajectory Autoencoding Planner (TAP) from Jiang et al. [1], where these methods were not evaluated on the Kitchen benchmark. As such, we only made a comparison with these methods on AntMaze-{Medium, Large, Ultra}, which Jiang et al. [1] also used as a benchmark. We chose to use the numbers from prior work directly, since setting up and tuning these Transformer-based methods often requires a substantial amount of time and extensive computing resources. However, to address this comment, we additionally compare HIQL with GCPC [2], a recently proposed offline goal-conditioned RL method that also uses a Transformer to model trajectories similarly to TT. GCPC shows better performance than TT on AntMaze-Large and reports its performance on goal-conditioned Kitchen (but not on AntMaze-Ultra).
We present the results below, where we took the GCPC results from the original paper [2]: | Task | GCBC | GC-IQL | TAP | TT | GCPC [2] | HIQL (ours) | |------------------------|-----------------|-----------------|--------|------------------|-----------------|--------------------------| | antmaze-medium-diverse | $67.3 \pm 10.1$ | $63.5 \pm 14.6$ | $85.0$ | $\mathbf{100.0}$ | $70.8$ | $86.8 \pm 4.6$ | | antmaze-medium-play | $71.9 \pm 16.2$ | $70.9 \pm 11.2$ | $78.0$ | $\mathbf{93.3}$ | $70.4$ | $84.1 \pm 10.8$ | | antmaze-large-diverse | $20.2 \pm 9.1$ | $50.7 \pm 18.8$ | $82.0$ | $60.0$ | $77.2$ | $\mathbf{88.2} \pm 5.3$ | | antmaze-large-play | $23.1 \pm 15.6$ | $56.5 \pm 14.4$ | $74.0$ | $66.7$ | $79.2$ | $\mathbf{86.1} \pm 7.5$ | | antmaze-ultra-diverse | $14.4 \pm 9.7$ | $21.6 \pm 15.2$ | $26.0$ | $33.3$ | - | $\mathbf{52.9} \pm 17.4$ | | antmaze-ultra-play | $20.7 \pm 9.7$ | $29.8 \pm 12.4$ | $22.0$ | $20.0$ | - | $\mathbf{39.2} \pm 14.8$ | | kitchen-partial | $38.5 \pm 11.8$ | $39.2 \pm 13.5$ | - | - | $\mathbf{65.0}$ | $\mathbf{65.0} \pm 9.2$ | | kitchen-mixed | $46.7 \pm 20.1$ | $51.3 \pm 12.8$ | - | - | $61.0$ | $\mathbf{67.7} \pm 6.8$ | The table above shows that, despite its simplicity, HIQL achieves the best (including ties) performance in both antmaze-large-{diverse, play} and kitchen-{partial, mixed}, generally outperforming the more computationally intensive Transformer-based methods. Putting it all together, we hope that the omission of TT/TAP results on Kitchen (which was not evaluated with these methods in prior work) is reasonable. Please let us know if there are any additional concerns or questions. **References**: Please see our global response.
On the Role of Entanglement and Statistics in Learning
Accept (poster)
Summary: This paper studies quantum learning theory, in particular the relationship between the quantum version of PAC learning and the quantum version of the statistical query (QSQ) model, as well as their connection to other considerations in quantum computing, such as entangled measurements and separable measurements. Specifically, the authors proved that 1. For learning Boolean concept classes, the entangled and separable sample complexities are polynomially related (at most quadratic power). 2. There exists a concept class with an exponential separation between quantum PAC learning with classification noise and QSQ learning. Along the way, the authors also proposed novel technical tools of quantum statistical query dimension, and these technical results are applied to problems ranging from state learning and distribution learning to specific problems in quantum computing such as shadow tomography, error mitigation, etc. Strengths: This paper is technically very solid and is able to prove a series of new results in quantum learning theory. PAC learning and SQ learning models are both important concepts in classical learning theory, and it’s nice to see that the authors are able to generalize them to the quantum domain, connect them to natural definitions in quantum, and prove separation results between these concepts. It’s also nice to see that the authors are able to find wide applications in quantum learning theory. Weaknesses: From my perspective, the most notable weakness of this work is its limited interest to the general machine learning community. There are theory papers at NeurIPS each year, many of which are very interesting and well received by NeurIPS audiences, but I’m afraid that this one falls too much on the theory side and might be of limited interest to the NeurIPS community. In general, the topics of PAC learning and the SQ model are more of theoretical interest and are targeted more at theoretical computer science.
The quantum version further delves into those directions and brings in definitions that do not exist in classical machine learning, such as entangled measurements/separable measurements, shadow tomography, etc. As further evidence, the references in the main body contain literally zero papers from top-tier machine learning conferences targeting general audiences, including NeurIPS, ICML, ICLR, AAAI, etc. Instead, there are many top-tier theory papers at STOC/FOCS, Journal of the ACM, and top-tier physics journals. In general, I believe that this paper can be quite competitive in top-tier theoretical computer science venues or quantum physics venues, but is out of scope for NeurIPS due to a lack of insight into up-to-date trends in current machine learning research. As a minor weakness, I think the references can be better presented. First, I find it a bit hard to locate references: there are quite a few big brackets citing >5 papers at the same time, and it’s very difficult to determine which paper talks about which topic. For instance, on Page 1, “There have already been many theoretical proposals for quantum algorithms providing speedups for practically relevant ML tasks such as clustering, recommendation systems, linear algebra, convex optimization, SVMs, kernel-based methods, topological data analysis [34, 14, 9, 42, 32, 26, 35, 23, 49, 46]” can be better written as … such as clustering [xx], recommendation systems [xx], linear algebra [xx], convex optimization [xx], SVMs [xx], kernel-based methods [xx], “and” topological data analysis [xx]. Actually, I think the authors missed some quantum computing papers accepted by past NeurIPS/ICML conferences that provide quantum speedup for solving machine learning problems, such as Kapoor et al. (https://proceedings.neurips.cc/paper/2016/hash/d47268e9db2e9aa3827bba3afb7ff94a-Abstract.html) and Li et al.
(http://proceedings.mlr.press/v97/li19b.html) for classification, Arunachalam and Maity (https://proceedings.mlr.press/v119/arunachalam20a.html) for boosting, Childs et al. (https://proceedings.neurips.cc/paper_files/paper/2022/hash/933e953353c25ec70477ef28e45a2dcc-Abstract-Conference.html) for logconcave sampling, etc. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Can the techniques in this paper be applied to prove separation results between quantum and classical PAC learning or SQ models? As far as I see, this paper studies purely quantum learning concepts and their relationships. It would be helpful to deliver more conceptual message about the difference between quantum and classical learning, which may be of general interest (this is also the storyline set at the beginning of intro, but quickly drives into purely quantum things). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 2 fair Limitations: N/A – this work is purely theoretical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! Regarding your question about separating quantum vs. classical techniques: There are known separations between classical and quantum PAC for the distribution-dependent setting, for example for DNF formulas [Bshouty, Nader H., Jackson, Jeffrey C.: Learning DNF Over the Uniform Distribution Using a Quantum Example Oracle; COLT '95]. Similarly, there are separations between classical SQ and quantum SQ witnessed by the parity functions [Arunachalam, Grilo, Yuen: Quantum Statistical Query Learning; CoRR]. Prior to the submitted work, no separation was known between quantum SQ and quantum PAC, which is why we focus on that. Based on your remark, we will improve the discussion of prior results in the introduction and focus on the delivery of the conceptual message to the broader community. Thank you for pointing that out! We value your comment regarding references and will update the introduction accordingly with references to these results in the revision. We will add references and improve their presentation according to your suggestion. On a higher level, our results provide an insight into where to look for quantum advantages in machine learning and what resources this requires—something we feel is of broader interest in the ML community. Thus, it is interesting to characterize the limitations of QSQ, a natural generalization of SQ that models near-term capabilities in quantum hardware. Our work can be viewed as a provable separation between QML with near term hardware and QML with more sophisticated hardware, by introducing techniques to understand the QSQ model better. On the classical side, the SQ model is intimately linked to (local) differential privacy and optimization, in particular, theoretical and practical algorithms fall into the SQ framework [Feldman, Ghazi; On the Power of Learning from k-Wise Queries; ITCS '17]. 
Theoretical results on PAC/SQ learning and quantum learning theory [Arunachalam, Quek, Smolin: Private Learning Implies Quantum Stability; NeurIPS ‘21] have been published earlier in NeurIPS. We believe that understanding their counterpart in the quantum setting is of interest for the broader ML community, especially those parts of it that have interest in quantum, and that NeurIPS is an appropriate venue for our work. Based on your recommendation, we will improve the discussion of quantum vs classical learning, to set the context for the typical audience at NeurIPS. Due to the lack of space for the NeurIPS submission we didn't delve into this to sufficient depth, but will implement it in the revision. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I would like to thank the authors for the detailed rebuttal, which makes the storyline more complete from my perspective. It would be helpful if the discussions in the rebuttal can be combined into the final version of the paper, since an additional page will be given.
Summary: This paper studies the power of different quantum machine learning models for learning Boolean functions, namely quantum PAC-learning (QPAC) with entangled measurements, QPAC with separable measurements, and quantum statistical query (QSQ). It has two main results. First, it shows that QPAC with entangled measurements is not more powerful than separable measurements in this task. More specifically, to *exactly* learn an $n$-bit Boolean function class, every learning algorithm using entangled measurements with $T$ copies of the quantum sample $|\psi_c\rangle=2^{-n/2}\sum_x |x,c(x)\rangle$ can be transformed into a learning algorithm using just separable measurements with $O(nT^2)$ samples. Second, it provides an exponential separation between QPAC with noise and QSQ. In the QSQ model, the learner can query the oracle with any observable $M$ (implementable with $poly(n)$ gates) and obtain an estimate of $tr[M\rho]$ within $1/poly(n)$ additive error. It constructs a concept class of $n$-bit Boolean functions that is QPAC learnable with $\eta$-classification noise (i.e., $|\psi_c\rangle=2^{-n/2}\sum_x \big(\sqrt{1-\eta}\,|x,c(x)\rangle+\sqrt{\eta}\,|x,1-c(x)\rangle\big)$) in time $poly(n, 1/(1-2\eta))$, whereas every QSQ learner requires $2^{\Omega(n)}$ queries. Furthermore, it also provides several applications in shadow tomography, testing purity of a quantum state, error mitigation, and learning output distributions of quantum circuits. Strengths: Understanding the power of different quantum learning models is an important research question in quantum learning theory. The results of this paper are quite solid. More specifically, for the first main result about the relation between entangled measurements and separable measurements, prior to this work, we only knew that they are exponentially separated when learning quantum states. This paper focuses on a smaller concept class and shows that they are polynomially related, which is quite surprising.
For the second result about measurement statistics, compared to the classical result that separates PAC from SQ due to Blum et al., the construction in this paper is based on degree-2 polynomials, which are more natural. And the proof of this result provides a novel approach to lower bound the complexity of QSQ via the quantum statistical dimension, which is the main technical contribution of this paper. The applications also improve over prior works in several aspects. Furthermore, this paper is well-structured, and most proofs are mathematically sound to me. Weaknesses: This paper may seem quite difficult to follow for people not working on quantum computing. Also, the proof overview section is too technical, and more intuition should be given. The relation between entangled measurements and separable measurements is $O(nT^2)$, which does not seem to be tight (this has already been pointed out in the paper). Technical Quality: 3 good Clarity: 3 good Questions for Authors: * In Sec. 3.2.1, it could be better to provide the classical definition of the statistical dimension and compare it to the quantum one. * This paper considers proper learning. What about quantum improper learning? The input is still the quantum sample state for some unknown Boolean function, but the goal is just to output some quantum state close to the given quantum sample state. Do the techniques in this paper still apply to this setting? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are stated. Potential negative societal impact does not apply here.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
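As a concrete (classical) illustration of the QSQ oracle described in this review's summary, where the learner submits an observable $M$ and receives $tr[M\rho]$ up to an additive tolerance, here is a small single-qubit simulation we wrote; the state, observable, and tolerance are our own illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def qsq_oracle(rho, M, tau):
    """Simulated QSQ oracle: returns tr(M rho) perturbed by at most tau,
    modeling the 1/poly(n) additive error the learner must tolerate."""
    exact = float(np.real(np.trace(M @ rho)))
    return exact + rng.uniform(-tau, tau)

# Example query: rho = |0><0|, M = Pauli-Z, so the exact answer is +1.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
pauli_z = np.array([[1.0, 0.0], [0.0, -1.0]])
estimate = qsq_oracle(rho, pauli_z, tau=0.05)
```

The lower bounds in the paper count how many such tolerance-limited queries are needed, which is exactly what makes QSQ exponentially weaker than QPAC for the constructed concept class.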
Rebuttal 1: Rebuttal: Thank you for your very interesting questions/comments. Indeed, our bound on $O(n T^2)$ is suboptimal, but in our main theorem statement we have a slightly combinatorial parameter $\eta$ and we show a bound of $O(n T \eta)$ which we show is tight for a certain concept class. We suspect that the right relation should be $O(nT)$ and leave it for future work. Regarding proper versus improper: our results do $\textit{not}$ assume that the learning algorithms are proper. In particular, our lemma bounding learning complexity with decision complexity holds for both proper and improper learners. Then, of course, for decision problems there is no concept of proper/improper learning. In our revision we will make it clear that our lower bound holds for even improper learners. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I keep my score.
Summary: This submission investigates the relationship between learning models with access to entangled measurements, separable measurements, and statistical measurements in the quantum statistical query (QSQ) model. The authors make several notable contributions: they establish the polynomial relationship between the sample complexity of entangled and separable measurements for learning Boolean concept classes, demonstrate an exponential separation between quantum PAC learning with classification noise and QSQ learning, introduce the concept of quantum statistical query dimension (QSD) to provide lower bounds on QSQ learning, prove exponential QSQ lower bounds for various tasks, show an unconditional separation between weak and strong error mitigation, and derive lower bounds for learning distributions in the QSQ model. These contributions advance our understanding of quantum learning theory and have implications for the development of quantum learning algorithms. Strengths: The paper successfully addresses multiple open questions regarding the capabilities of diverse quantum learning models. The insights gained from the obtained results offer a valuable understanding of the potentials and limitations of near-term quantum machines in comparison to fault-tolerant quantum computers. Additionally, the authors leverage the findings from the QSQ model to enrich our comprehension of crucial aspects in the field of near-term quantum machines, such as error mitigation and distribution learning. The theoretical contributions made in this work play a vital role in advancing quantum learning theory and provide practical guidance for developing and optimizing quantum algorithms on both current and future quantum hardware. Weaknesses: Going over the whole main text, I did not find any major weakness in the submission. While I did not review the entire proof, the portion I examined did not reveal any apparent issues.
However, I did notice some minor concerns such as typos and incorrect notations. I suggest that the authors thoroughly examine their work to address these issues, ensuring a self-consistent and easily understandable manuscript. For instance: 1. Line 346, 'a important' 2. Line 375, '?, (ii)' 3. Line 41, Supplementary: '$\sum_{x\in D_0}$' 4. Line 44, Supplementary: '$I(D_0(x)\leq D(x))$' 5. Line 123, Supplementary: 'a n-bit' 6. Line 602, Supplementary: 'Theorem ??' Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: It is recommended that the authors address the issues outlined in the Weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have acknowledged the limitation by discussing open questions in the Discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We will definitely address these and also look over the document for the next revision.
Summary: This paper considers the task of learning an unknown concept class with quantum access. In particular, the authors make a comprehensive comparison of the settings with entangled measurements, separable measurements, and statistical measurements in the quantum statistical query (QSQ) model, respectively, and show that entangled measurements are at most polynomially more powerful than separable measurements, which are exponentially more powerful than QSQ learning. Notably, this second separation is a quantum analog of the classical result separating classical SQ learning from classical PAC learning. The authors also discuss possible extensions and applications of their result. Strengths: This paper provides a clear answer on the possible role of entanglement in the quantum PAC learning setting, and establishes a distinct separation between quantum PAC learning and QSQ learning, which provides an interesting conceptual message. The technical contribution of this paper is solid, and the presentation is clear. Weaknesses: The two main results of this paper are not closely related from my perspective. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Following my previous point, could you please elaborate more on the connection between the two settings? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. While the proof techniques are very different in these two results, we believe that both results are related within the theme "need for entanglement in learning". Previously, it was known that the complexities satisfy QSQ measurements >= Sep measurements >= Ent measurements for all learning tasks. Our work aims to solidify the understanding of the power of the three models and show that Sep and Ent are polynomially related and QSQ measurements are exponentially weaker than Ent measurements. In our revision, we will keep your comment in mind to make our paper more cohesive. Thank you very much for pointing this out! --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the explanation. I maintain my rating.
NeuroGF: A Neural Representation for Fast Geodesic Distance and Path Queries
Accept (poster)
Summary: The paper proposes a neural implicit representation of a 3D surface that enables querying for geodesic lengths and paths between points on the surface. A neural network is overfitted to one given surface. The input query points are embedded into high-dimensional features whose Euclidean distance represents the desired geodesic distance. Another branch learns to take the high-dimensional features and deform a straight line into the geodesic between the two points. A final branch learns to represent the SDF of the surface. The network is trained via strong supervision w.r.t. given geodesics computed on the mesh. The results are shown to produce high accuracy when overfitted to well-known graphics models. Strengths: The idea of overfitting to represent geodesic paths can be quite useful, especially considering the ability to parallelize many queries through the network as one batch. Weaknesses: I find the work to provide a good technical solution to the given problem, however I cannot champion the paper: the technical contribution is quite limited. Beyond the core idea of overfitting to geodesics of a given shape, the rest of the approach is quite straightforward. As such this feels more like an application than a NeurIPS paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Why use the SDF at all? Why not provide the first 2 branches + the original mesh as the full representation? If the SDF is completely decoupled, what good does it bring? 2. Is there any guarantee that the geodesic paths actually lie on the isosurface of the SDF? If they do not, how is the difference reconciled? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
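The distance branch this review describes, where query points are embedded by a shared MLP so that the Euclidean distance between embeddings serves as the predicted geodesic distance, can be sketched as below. This is our own toy with random, untrained weights, not the authors' implementation; it only demonstrates that symmetry and zero self-distance hold by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((64, 3)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((16, 64)), rng.standard_normal(16)

def embed(p):
    """Shared MLP: map a 3-D query point to a 16-D feature vector
    (in the actual method, weights overfitted to one given surface)."""
    h = np.tanh(W1 @ p + b1)
    return W2 @ h + b2

def geodesic_distance(p, q):
    """Predicted geodesic distance = Euclidean distance between embeddings;
    symmetric and zero for p == q by construction."""
    return float(np.linalg.norm(embed(p) - embed(q)))
```

Because the predictor is a pullback of the Euclidean metric, symmetry and the triangle inequality come for free; accuracy on actual geodesics is what the supervised training against precomputed geodesics must provide.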
Rebuttal 1: Rebuttal: ### **[Rebuttal to Reviewer Ujtd]** ### **[W1]** *Limited technical contribution due to the quite straightforward approach.* **Response:** Thanks for your recognition of the usefulness of our core idea and technical solution. **Still, we beg to differ with the judgment that our work shows limited contribution only because our technical implementations are quite straightforward.** On one hand, as the very first attempt to adapt the neural implicit modeling paradigm for geodesics representation, we resort to a concise yet highly effective learning framework to verify the potential of such a completely new geodesics representation paradigm. On the other hand, the formulation of the shortest path as a discrete sequence of ordered curve points requires an appropriately designed learning structure, and our corresponding adaptation of previous "folding-style" 3D shape reconstruction decoders [14, 43] to a 1D version for curve deformation also has its technical value and thus is not as straightforward as the reviewer commented. Besides, we also refer the reviewer to our extensions of generalizable NeuroGFs (asked by Reviewer yBGS), as given in Figure R1 and Table R4 of the uploaded PDF file as well as **our responses to Reviewer yBGS [W1]**. These explorations are also of significance. In general, we believe that this work does bring novel insights and meaningful technical contributions to the problem of geodesics representation and computation and is thus qualified for the NeurIPS research community. ### **[Q1]** *Necessity of the SDF learning branch.* **Response:** There seems to be a **misunderstanding** of our approach. Maybe our descriptions in lines 180-185 of the paper are not clear enough. In fact, during the testing phase, we can say that the SDF learning branch is completely decoupled.
However, during the training phase, since the query points for both SDF and geodesics share the same FC layers for feature embedding, the learning process of the SDF branch will have an impact on the other branches. And in our ablation studies (Table 3 of the paper), the effectiveness of the SDF learning branch has been validated (i.e., removing the SDF learning branch causes performance degradation of geodesics). Moreover, as discussed in **our response to Reviewer yBGS [W5]**, the learning of SDF also benefits the convergence speed of the geodesics learning process. ### **[Q2]** *Difference between the geodesic paths and the iso-surface of the SDF.* **Response:** As both quantitatively and qualitatively illustrated in the paper, the generated shortest paths are close enough to the underlying iso-surface, meaning that we can conveniently deduce post-processed shortest paths whose curve points are exactly located on the surface by straightforward local projection. Here, we simply implement this process by locally sampling surface points and then performing nearest-neighbor matching for the raw outputs of curve points. As shown in Table R2 of the uploaded PDF file, **such a post-processing procedure consistently brings further accuracy improvement for the prediction of shortest paths on ALL the testing meshes**. On average, the error decreases from the original $1.25 \times 10^{-2}$ to $1.09 \times 10^{-2}$. --- Rebuttal Comment 1.1: Title: Looking forward to receiving your feedback Comment: Dear **Reviewer Ujtd** Thank you for taking the time to review our submission. As the discussion phase between the reviewers and authors is coming to the end, may we know whether there are still unsolved concerns from you? We are pleased to address them. Looking forward to receiving your feedback. Best regards, The authors
Summary: The authors propose a solution to the estimation of the geodesic distance between a pair of points (source, target) on a mesh. The proposed approach relies on an implicit field. In particular, the implicit field learns/memorizes the distance between each pair of points. The authors sample a subset of mesh vertices and pre-compute the geodesic distance between them. These points are later used to optimize the network weights (implicit field). Each point is converted into a feature vector by a shared MLP. The difference between source and target features is used to predict the geodesic distance between the two points. On top of this, a second branch predicts the implicit field representing the shape itself (using the shared MLP features as input). Finally, a third branch using the same features, as concatenation, predicts the geodesic path connecting the source to the target as a set of points. The authors demonstrate the effectiveness of the method on a handful of meshes, showing numerical and qualitative results, while comparing with other approaches. Strengths: The paper addresses an important problem. The manuscript describes the idea intuitively, clearly, and in a well-structured manner. The solution is interesting, original, and well-crafted. It leverages past approaches to achieve a different goal. Furthermore, the method is dramatically faster than the chosen baselines while being accurate - both in terms of geodesic distance, geodesic path, and reconstruction error. This is possible with the use of a NN, as it requires a single forward pass, compared to classic methods which require extensive computations. Weaknesses: The proposed method uses a shared MLP to predict a set of features from 3D points. However, the input is not limited to points belonging to the surface. The same is true of the output of the geodesic-path branch. This seems to be quite an important limitation which may severely limit the application of this method.
Indeed, the authors introduce an extra loss term (5) to "force" the points on the curve to belong to the surface. Yet there is no guarantee this happens. Furthermore, to apply this technique we have to optimize a neural network, which takes time, while the classic methods do not require such expensive pre-processing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The solution is interesting and leaves me with some questions; it would be great if the authors could address some of these: - how much time does the pre-processing require? i.e., optimizing the network + sampling points + estimating distances for training - the current formulation for the path relies on AtlasNet/FoldingNet; have the authors thought about using a different approach that cannot predict points off the surface? For example, the use of a transformer that predicts the index of the next point on the path. - how well does the method scale with very large meshes? (#V > 1M) - how robust is the method on sparse meshes where very few vertices can be used to define geodesic distances? For example, on a cube where the only vertices are at the corners, and the query points are pairs on the faces (assuming this is possible). I thank the authors for the time spent addressing any of these concerns.
Furthermore, it would be great if the authors could evaluate the approach on a very diverse set of meshes, such as a cube and a super-dense version of "dragon". Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Rebuttal to Reviewer nwTQ]** ### **[W1]** *1) The input is not limited to points belonging to the surface; 2) The output geodesic path is not guaranteed to lie on the surface; 3) Have to optimize a neural network for each new input.* **Response:** For the first issue, we argue that it is not an influencing factor in practice, because users will naturally input paired query points that are either vertices of the target mesh model or sampled from the mesh surface. Thus, it would be unnecessary to impose restrictions on the input geodesic query points. For the second issue, as both quantitatively and qualitatively illustrated in the paper, the generated shortest paths are close enough to the underlying iso-surface, meaning that we can conveniently deduce post-processed shortest paths whose curve points are exactly located on the surface by straightforward local projection. Here, we simply implement this process by locally sampling surface points and then performing nearest-neighbor matching for the raw outputs of curve points. As shown in Table R2 of the uploaded PDF file, such a post-processing procedure consistently brings further accuracy improvement for the prediction of shortest paths on ALL the testing meshes. On average, the error decreases from the original $1.25 \times 10^{-2}$ to $1.09 \times 10^{-2}$. For the third issue, as can be found in our below response to your question [Q1], although our approach is an overfitting process, its convergence speed is fast. Moreover, in terms of the overfitting paradigm, NeuroGF inherits from previous works like SEP [13] (C. Gotsman, et al., SIGGRAPH Asia-22'), which can be viewed as the process of "compressing geodesics". As summarized in [13], this setting well suits many interactive applications where rapid computation of arbitrary point-to-point geodesics on very large meshes is required. 
### **[Q1]** *Time cost of the whole pre-processing process (including network training).* **Response:** The time cost statistics of the data pre-processing procedures (not including network training) have already been provided in Table S1 of the supplementary material, and discussed in detail on page 3, lines 15-33 of the supplementary material. As for the network training, practically, since NeuroGF is an offline overfitting process, users can flexibly control the trade-off between training time and fitting accuracy. Typically, the training time cost for achieving performances comparable to those reported in the paper is "minute-level". Below we provide statistics averaged over all 10 testing models, showing that the convergence speed is fast. For example, it takes less than half a minute to reach an MRE lower than 3\%. | MRE | <3\% | <2\% | <1\% | | :----: | :----: | :----: | :----: | | **Training Time** | 0.4min | 1.7min | 7.9min | In our experience, in most cases, the training benefit saturates within 10 minutes. Further extending the training process only brings marginal performance gains at a slow pace. ### **[Q2]** *Think about different approaches that cannot predict path points off the surface.* **Response:** Thanks for your valuable comments and insightful consideration. For the current learning framework, implemented in a generative fashion, it is basically impossible to ensure that all generated points exactly lie on the surface. We agree that predicting the next point along the path might be a viable scheme worth further exploration. Yet, considering the limited time for rebuttal and the complexity of designing a completely new framework, we cannot conduct systematic explorations of this issue. We leave it as a promising direction for future study.
### **[Q3]** *How well does the method scale with very large meshes (\#V > 1M).* **Response:** Following your instructions, we added more diverse experiments on very large-scale meshes. Here, we experimented with a much denser version of the *dragon* model with 1.5M vertices and another classic graphics model, *lucy*, with 6.9M vertices. As reported in the Table below, our approach works well on these two highly dense mesh models. | Mesh | \#V (M) | MRE (\%) | Chamfer-$L_1$ ($\times 10^{-2}$) | | :----: | :----: | :----: | :----: | | dragon | 1.5 | 0.70 | 1.335 | | lucy | 6.9 | 0.58 | 1.404 | Moreover, we further performed evaluations on three large-scale real-scanned meshes covering (a) an indoor room, (b) an urban scene, and (c) a street view, which come from diverse dataset sources. As shown in Figure R2 of the uploaded PDF file, our approach still achieves satisfactory performances (with MRE rates of 0.37\%, 1.14\%, and 0.23\%, respectively) on the three scene-level meshes with million-scale amounts of vertices. ### **[Q4]** *Robustness to extremely sparse meshes with very few vertices (e.g., a cube model with only vertices at corners).* **Response:** As you suggested, here we add an experiment with a cube model, with only 8 vertices at the corners and 12 triangles. The total number of available ground-truth pairs is 28. The testing data are prepared by subdividing the cube into a much denser mesh, on which we calculated enough pairs of geodesic data for performance evaluation. It turns out that our approach fails under this extreme case, achieving about a 70\% MRE rate. Yet, we argue that in practice we can conveniently perform the necessary mesh subdivision to enable preparing more ground-truth pairs. For example, when we subdivide the cube model to only about 100 vertices for ground-truth preparation, the resulting performance drastically improves to a 5.24\% MRE.
In addition to this extreme case, we would also refer the reviewer to our response to Reviewer oZNk [W2] for our results when the ground-truth data are sparse. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to respond to my concerns in a well-structured manner, and for the additional evaluation. ### W1 As the shortest path may not belong to the mesh, you are suggesting to perform a local projection. This may solve the problem, or introduce artifacts in concave regions. Either way, the numerical results suggest the error (10^-2) is relatively high compared to what I would agree to be accurate predictions. ### Q1 Thank you for the comprehensive response. Overall a single query may take a long time, even hours (preprocessing of 21 hours from Table S1 + 119 minutes), this time being amortized over multiple queries. A mesh-based approach, in contrast, computes each query at runtime. A plot showing #query vs. total training time would show the benefits of this approach. I struggle to see where this approach (for SSAD or MSAD) would be applicable; can the authors cite any applications where a 10^-2 error in the geodesic error would be acceptable? ### Q3 Thank you for the additional evaluation. ### Q4 Thank you for your response and additional evaluation. I understand that with sparse GT samples the accuracy degrades; this is perhaps a limitation worth mentioning in the manuscript. --- Reply to Comment 1.1.1: Comment: **Response:** We sincerely appreciate your evaluation of our rebuttal materials and the active discussion. Here, we further respond to your raised questions with more targeted explanations. In terms of downstream application scenarios, it is worth clarifying that the geodesic representation accuracy achieved by our approach is adequate for supporting a rich variety of geometry processing and modeling tasks.
Note that the popular classical geodesic algorithms (e.g., *heat method* (HM) and *fast marching method* (FMM)) typically produce results with higher than 1\% errors (sometimes even much higher for anisotropic meshes). Hence, to answer your question about "*applications where a $10^{-2}$ error in the geodesic error would be acceptable*", basically many papers where (original implementations or variations of) HM and FMM are used to produce geodesic information can be taken as examples. As listed in the following, [A, B, C] involve HM-based and FMM-based approaches for geodesic computation, and [D] simply resorts to constructing k-NN graphs followed by the Floyd’s shortest path algorithm, which leads to even higher errors. Also, it can be used for supporting various interactive geometry processing and modeling tasks, such as texture transfer, decal placement, remeshing, smoothing, splines on surfaces, as reported in [E], in which the authors adopted a low-dimensional Euclidean embedding for computing geodesics with approximation error around 5\%. -- [A] A. Poulenard, et al., "Multi-Directional Geodesic Neural Networks via Equivariant Convolution," in ACM TOG, 2018. -- [B] B. L. Bhatnagar, et al., "Multi-Garment Net: Learning to Dress 3D People from Images," in Proc. ICCV, 2019. -- [C] R. Wiersma, et al., "CNNs on Surfaces using Rotation-Equivariant Features," in ACM TOG, 2020. -- [D] Z. Li, et al., "Geodesic Self-Attention for 3D Point Clouds," in Proc. NeurIPS, 2022. -- [E] D. Panozzo, et al., "Weighted Averages on Surfaces," in ACM TOG, 2013. Moreover, we would like to further explain the actual application scenarios of our approach in an overfitting paradigm. To facilitate the intuitive understanding of what we say "interactive applications with rapid and extensive geodesics querying", here we illustrate a very specific case. 
In the practical industrial scenarios for game/movie/animation, many important 3D asset models such as characters, animals, or other types of objects, need to be repeatedly manipulated, with extensive querying of geodesics at each time. In this case, it can be highly inefficient to apply conventional computational algorithms for many times, and it is even more impractical to store all pre-computed geodesics for every pair of source-target points due to extremely huge memory cost. For this situation, our NeuroGFs can serve as a good choice. Intuitively, once the offline training (i.e., overfitting) process is completed, the neural network model can be viewed as "an attribute" of the target mesh model, just like other attributes like texture. This trained neural network model is quite compact (around 1MB), fast to online query, and can be permanently stored together with the original mesh model as "a compression of its complete geodesics information". Besides, as for your comment that suggests showing the plot of \#query vs. total training time, we remind that the offline training time cost is not influenced by the number of online queries, because in our implementation we aim to learn complete geodesics of the whole mesh model, regardless of how many queries the users may specify. This setting is reasonable according to the preceding paragraph explaining our suitable application scenarios. More importantly, even if in some cases training time cost is really the most critical consideration factor, one can have multiple choices to deal with the speed-accuracy trade-off: -- ***(a) Preparing sparser ground-truth pairs.*** As reported in Table S1 of the supplementary material as well as Table R3 of the uploaded PDF file, one can choose to generate sparser ground-truth pairs to significantly reduce the time cost of pre-processing procedures, and the resulting geodesic representation accuracy can still maintain satisfactory. 
-- ***(b) Early stopping of the training process.*** As can be found in our rebuttal response to your [Q1], the fast convergence speed of our NeuroGFs allows us to flexibly control the training time cost while maintaining highly competitive performances. -- ***(c) Adopting generalizable NeuroGF learning frameworks.*** As designed and comprehensively evaluated in our rebuttal (Figure R1 and Table R4 of the uploaded PDF file), one can directly use the generalizable version of our NeuroGF representation framework. After training, the generalizable NeuroGFs can be directly applied to process unseen shapes and categories while achieving satisfactory performances.
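Option (b) above amounts to a simple guard in the overfitting loop: stop as soon as a target accuracy is reached. The sketch below is illustrative only; `train_step` and `eval_mre` are hypothetical stand-ins for the actual optimization step and the MRE evaluation on held-out geodesic pairs.

```python
def train_with_early_stop(train_step, eval_mre,
                          target_mre=0.03, max_steps=10_000, eval_every=100):
    """Run overfitting steps until the mean relative error (MRE)
    on held-out geodesic pairs drops below target_mre."""
    for step in range(1, max_steps + 1):
        train_step()                      # one optimization step
        if step % eval_every == 0:
            mre = eval_mre()              # held-out MRE check
            if mre < target_mre:
                return step, mre          # early stop: target reached
    return max_steps, eval_mre()          # budget exhausted
```

Given the minute-level convergence reported in the rebuttal (MRE below 3% within about half a minute), such a guard lets users trade training time for accuracy directly.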
Summary: This paper presents a framework to efficiently compute pairwise geodesic distances and shortest geodesic paths on the surface of a given mesh. This is accomplished by overfitting a neural learning model to a given mesh to regress the distances, the paths, and an SDF to reconstruct the surface. The geodesic distances/paths are learned in an implicit manner given pairs of points on the surface - specifically, points are sampled along the line between the points and the corresponding points on the geodesic path are predicted by a neural network. Strengths: The problem being tackled, namely efficient geodesic computation, is important for the graphics community. The method is simple and easy to understand and reproduce. Weaknesses: 1. In my opinion, the overfitting setting weakens the contribution significantly, especially considering that GraphWalks [Potamias et al.] performs this task with a dataset and can predict, even if with lower accuracy, on unseen test shapes. This work can be extended to learn on datasets, e.g., simply by learning a global shape descriptor and appending it to the existing learned point features. 2. Since it is in the overfitting setting, the results are less impressive. Neural networks are known for their ability to interpolate between seen data - i.e., if the model is trained to predict geodesics between a large enough number of pairs, it is not too difficult to then get good performance on the remaining pairs at inference. 3. Additionally, since it is in the overfitting setting and thus needs to be trained for every new mesh, the training time itself should be considered as part of the time taken to compute the geodesics. The training time for each mesh is not mentioned in the paper. 4. The network takes in point coordinates (via FC layers) and thus the neighborhood of a point is not encoded in any way - a better representation would also encode the immediate neighborhood of the given pair of points.
However, this has not been tried in the paper. 5. The role of the SDF is simply as a byproduct, as noted by the authors in the supplemental. Table 3 shows that adding the SDF only provides a small improvement. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The computation of the Distribution constraint (Eq. 11) is not clear - the text says that a separate network is trained to predict the SDF of the generated curve points P_m, and that it is trained in advance and kept fixed. However, P_m itself is computed during training (from the line samples), so how is the network trained in advance? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: As discussed in the weakness section, the primary limitation is that the method has to be trained for every given mesh, thus I'd consider the training time as part of the geodesic computation time. I'd suggest the authors extend the method to learn on datasets with a global shape descriptor to make it more useful for the community. In my opinion, the overfitting setting weakens the result and is not as useful. The work would be very useful if it could be used to perform online computations on unseen shapes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Rebuttal to Reviewer yBGS]** ### **[W1]** *Weakened contribution due to the overfitting setting; Extension to learn on datasets.* **Response:** Thanks for your insightful advice, yet we beg to differ with the judgment that the overfitting setting "significantly" weakens our contribution. In terms of the overfitting paradigm, NeuroGF inherits from previous works like SEP [13] (C. Gotsman, et al., SIGGRAPH Asia-22'), which can also be viewed as the process of "compressing geodesics". As summarized in [13], this setting well suits many interactive applications where rapid computation of arbitrary point-to-point geodesics on very large meshes is required. Still, we do agree with the reviewer's advice that it is very promising to extend NeuroGFs to be generalizable models. We made efforts to such extensions and performed comprehensive experimental evaluations. As shown in Figure R1 of the uploaded PDF file, we designed three versions of generalizable NeuroGFs using (a) *autodecoder-based*, (b) *point transformer-based*, and (c) *graph convolution-based* feature extraction strategies. Notably, NeuroGF equipped with (a) or (b) for shape encoding can directly work on point clouds during testing. These modifications are technically straightforward since we only need to replace the query point embedding stage with a feature-conditioned process. We used the popular ShapeNet mesh dataset pre-processed by DISN (Q. Xu, et al., NeurIPS-19'), which covers 13 different shape categories. We collected 3000 models from 8 categories as our training set. For each model, we only sparsely generated 2K ground-truth training pairs. 
For evaluation, we constructed different testing sets: 1) SN-Airplane, SN-Chair, and SN-Car are collected within the same categories of *airplane*, *chair*, and *car*, each of which contains 500 models; 2) SN-8x50 is collected from the same 8 categories as in the training set (each category with 50 models), but each shape is unseen during training. 3) SN-5x50 is collected from the other 5 different categories (each category with 50 models). Results are provided in Table R4 of the uploaded PDF file. The testing results on (a) validate the category-specific representation capability (with about 3\% MRE). The testing results on (b) show that our extended approach equipped with a powerful deep point encoder works well on point clouds for both seen and unseen categories. And (c) further incorporates mesh connectivity cues, thus achieving better performance. ### **[W2]** *Not difficult to obtain good performance with a large enough number of training pairs.* **Response:** Please refer to our detailed responses to Reviewer oZNk [W1] and [W2]. Briefly speaking, the actual ratio of "our used training pairs" to "all pairs between vertices of the original mesh" for geodesics ground-truth preparation is typically small, and our approach can still maintain relatively satisfactory performances even with much sparser (e.g., thousands of) ground-truth training pairs. ### **[W3]** *Time cost of training NeuroGF for overfitting each new mesh.* **Response:** Practically, since NeuroGF is an offline overfitting process, users can flexibly control the trade-off between training time and fitting accuracy. Typically, the training time cost for achieving performances comparable to those reported in the paper is "minute-level". Below we provide statistics averaged over all 10 testing models, showing that the convergence speed is fast. For example, it takes less than half a minute to reach an MRE lower than 3\%.
| MRE | <3\% | <2\% | <1\% | | :----: | :----: | :----: | :----: | | **Training Time** | 0.4min | 1.7min | 7.9min | In our experience, in most cases, the training benefit saturates within 10 minutes. Further extending the training process only brings marginal performance gains at a slow pace. ### **[W4]** *Encoding the neighborhood of the given pair of query points.* **Response:** Query point encoding is indeed an important issue, as presented in our response to Reviewer Racd [Q1] and our newly-added ablation study (variant (1) in Table R1 of the uploaded PDF file) of adding NeRF-style positional encoding to point coordinates. However, in our processing pipeline, the technical soundness of encoding the neighborhood of query points might be questionable. This is because, in real application scenarios, users may randomly specify query points with varying density and distribution. In an extreme case, if a user only specifies a single pair of query points, the neighbor aggregation process would fail. Hence, we still adopt our original design. ### **[W5]** *Role of the SDF learning branch.* **Response:** Essentially, we include the SDF branch with the goal of forming a unified representation encoding both 3D geometry and geodesics information, and it also brings actual benefits. In addition to the boosting effect (though not very significant) verified in our ablation studies, the learning of SDF also speeds up the overall convergence of the geodesics branches. The average training time needed to reach an MRE lower than 3\% and 2\% would respectively increase to 0.9min and 2.4min if we removed the SDF branch. ### **[Q1]** *How the separate SDF network is trained for the distribution constraint.* **Response:** It seems that there exists a misunderstanding of this learning procedure. Specifically: - We remind that the notation $\mathcal{N}_\phi$ used in Eq. (11) is a separate network for SDF fitting.
- This separate SDF network is different from the notation $\mathcal{N}_\Theta$ as given in Eq. (1). It will cause confusion if mixing them up. The separate SDF network $\mathcal{N}_\phi$ is not optimized on the generated curve points $\mathbf{p}_m$. Instead, it is optimized on randomly sampled spatial queries in advance. After finishing training, we freeze its parameters and apply it to infer SDF values of the generated curve points $\mathbf{p}_m$. --- Rebuttal Comment 1.1: Comment: Dear **Reviewer yBGS** Thank you for taking the time to review our submission. As the discussion phase between the reviewers and authors is coming to the end, may we know whether there are still unsolved concerns from you? We are pleased to address them. Looking forward to receiving your feedback. Best regards, The authors
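The two-stage procedure clarified above can be sketched as follows. This is an illustrative stand-in, not the authors' code: the pre-trained, frozen SDF network $\mathcal{N}_\phi$ is replaced here by an analytic unit-sphere SDF so the sketch stays self-contained, and the distribution constraint is written as a mean absolute SDF penalty on the generated curve points.

```python
import numpy as np

def frozen_sdf(points):
    """Stand-in for the pre-trained, frozen SDF network N_phi.
    Here: the analytic signed distance to a unit sphere at the origin
    (an assumption for illustration, not the learned network)."""
    return np.linalg.norm(points, axis=-1) - 1.0

def distribution_loss(curve_pts):
    """Eq. (11)-style constraint: penalize |SDF| of generated curve
    points so they are pushed toward the zero iso-surface. In the
    actual framework, gradients flow only into the curve-point
    generator, since N_phi is kept frozen."""
    return np.abs(frozen_sdf(curve_pts)).mean()
```

Points already on the iso-surface incur zero loss, while off-surface points are penalized in proportion to their distance from it.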
Summary: This paper develops a neural network architecture to estimate the geodesic distances and shortest geodesics between query points on a given 2D surface. It also provides a signed-distance function field evaluated at the given query points. The architecture consists of a set of FC layers followed by pointwise MLPs. The learning process is supervised. That is, the output of the network is compared with the ground truth distances and geodesic paths. The paper demonstrates these ideas for a number of 3D surfaces and compares the results with two previously published approaches. The results are found to be more accurate and take less computing time. Strengths: There are many techniques in shape analysis of 3D objects that require computing geodesics or geodesic distances between arbitrary points. This paper develops a learning-based solution that provides slightly more accurate estimates at a somewhat faster pace. Weaknesses: In my opinion this problem is not that challenging. The paper uses about 20K points on a surface and generates ground truth quantities (geodesic distances, geodesics, etc.) from them. This seems to me like a very dense sampling. If on a 2D surface we have pairwise distances between 20K (or some such number of) points, then the task of finding distances between the remaining arbitrary pairs does not seem that challenging. Indeed, the errors across different methods are not that different – mostly within one percentage point or so. In terms of regression, one is learning a function f: S x S --> R_+ using millions of data points for the geodesic distance estimation. How about training the algorithm with only hundreds or even thousands of paired distances? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The conclusion states that the paper learns “neural implicit representations for 3D surface geodesics”. I do not quite understand what this means. Is there a mathematically precise statement that can replace this ambiguous claim?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: As the authors state, there is no guarantee that the shortest path lies on the actual surface. This seems like a big limitation. How much error does the post-processing introduce? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Rebuttal to Reviewer oZNk]** ### **[W1]** *Very dense sampling of ground-truth training data.* **Response:** There seems to be a misunderstanding about preparing ground-truth pairs for training. In our experiments, we create a simplified version of the mesh model with around 20K vertices if the original mesh is of larger scale (note that the testing data are still obtained from the original input mesh). Then, as presented in the supplementary material (page 3, lines 17-19), we compute and preserve paired ground-truth geodesics between 1024 source points and 4096/2048 target points (4096 for geodesic distances, 2048 for shortest paths), rather than all pairs between the 20K vertices. The proportions of "our used training pairs" to "all pairs between vertices of the original mesh" for geodesic distance ground-truth preparation are given below. We can see that the ratio of our used pairs is very small, except for the sparse *nail* model. | Mesh | armadillo | bimba | bucket | bunny | cow | dragon | fandisk | heptoroid | maxplanck | nail | | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | | **Ratio** | 0.03% | 0.15% | 0.69% | 0.69% | 0.40% | 0.004% | 2.10% | 0.01% | 0.35% | 71.2% | Moreover, we did experiment with different numbers of ground-truth training data, as comprehensively evaluated in Table S1 of our supplementary material, where the second row (\#Sources=1024) corresponds to the setup we used in the paper. We can observe that the resulting performance (MRE of 2.21\% on the challenging dragon model) can still be competitive even when we only use 32 source points (the last row). ### **[W2]** *Training with highly sparse ground-truth pairs.* **Response:** To better address your concern, in addition to the experiments conducted in Table S1 of our supplementary material, we further explored much sparser settings of paired ground-truth preparation.
Due to limited time, here we only used five testing meshes, *armadillo*, *bunny*, *cow*, *dragon*, and *nail*, and listed the average performances in Table R3 of the uploaded PDF file. When training NeuroGFs with only 8K and 2K ground-truth pairs, the accuracy still remains relatively satisfactory. Besides, please also refer to our newly-added experiments (Table R4 of the uploaded PDF file) extending NeuroGFs to be generalizable learning models (as asked by Reviewer yBGS), where we only prepared about 2K ground-truth pairs for each training mesh. ### **[Q1]** *Ambiguous meaning of neural implicit representations for 3D surface geodesics.* **Response:** The concept of "neural implicit representation" has become popular in recent years, especially for representing 3D signals, as in DeepSDF (CVPR-19'), NeRF (ECCV-20'), and their numerous follow-up works. Generally, in contrast to "implicit representation", traditional 3D data structures like meshes, voxels, and point clouds are known as "explicit representations". That is, they have a finite number of explicitly stored data elements, such as vertices/triangles, voxel occupancy statuses, and spatial points. In contrast, neural implicit models represent the target signal at infinite resolution in a "query-and-answer" fashion. For example, given a certain 3D query point position, we feed it into a neural network, which outputs the scalar value of its signed distance. Thus, by densely sampling query points and collecting their outputs, we can flexibly reconstruct the surface geometry from the signed distance field at arbitrary resolution (depending on the number of queries). In our case, the formal mathematical description of the proposed "neural implicit representations for 3D surface geodesics" has already been given by Eq. (1) in the paper. Specifically: - The input query is a pair of source and target points ($\mathbf{q}_s$;$\mathbf{q}_t$) located on the underlying surface.
- The neural model outputs a scalar value of geodesic distance $d$ and a sequence of shortest path points $\mathbf{c}_{s \rightarrow t}$. ### **[L1]** *1) No guarantee that the shortest paths lie on the surface; 2) Effects of post-processing.* **Response:** As both quantitatively and qualitatively illustrated in the paper, the generated shortest paths are close enough to the underlying iso-surface, meaning that we can conveniently deduce post-processed shortest paths whose curve points are exactly located on the surface by straightforward local projection. Here, we simply implement this process by locally sampling surface points and then performing nearest-neighbor matching for the raw outputs of curve points. As shown in Table R2 of the uploaded PDF file, such a post-processing procedure consistently brings further accuracy improvement for the prediction of shortest paths on ALL the testing meshes. On average, the error decreases from the original $1.25 \times 10^{-2}$ to $1.09 \times 10^{-2}$. Hence, we beg to differ with the judgment that "This seems like a big limitation". --- Rebuttal Comment 1.1: Title: Looking forward to receiving your feedback Comment: Dear **Reviewer oZNk** Thank you for taking the time to review our submission. As the discussion phase between the reviewers and authors is coming to an end, may we know whether there are still unsolved concerns from you? We are pleased to address them. Looking forward to receiving your feedback. Best regards, The authors
Rebuttal 1: Rebuttal: ### **[Global Response]** We sincerely thank all five reviewers for their time and efforts in reviewing this paper and providing different aspects of valuable suggestions and helpful comments. In summary, during the rebuttal period, we made the following major efforts to address reviewers' concerns. The critical contents and results will be included in our paper or supplementary material to further improve its quality. - We experimented with additional testing meshes of more diverse types and larger scales. - We conducted more ablation studies to explore different choices in our specific technical implementations. - We demonstrated the effectiveness of our NeuroGF when trained with a limited amount of ground-truth pairs. - We introduced post-processing procedures for making the generated shortest path points lie exactly on the underlying surface, which can further boost the representation accuracy. - We extended NeuroGF to generalizable learning frameworks and evaluated their effectiveness on the popular and widely-used ShapeNet dataset. - We clarified and presented statistics on the time cost of network training (as well as other pre-processing procedures). - We explained the goal, necessity, and effects of adding the auxiliary SDF learning branch. - We carefully answered reviewers' questions about some confusions/misunderstandings of our approach and some setups/procedures to facilitate better assessment of this work. For convenience, below we briefly summarize each raised Weakness (W), Question (Q), and Limitation (L), and provide our response item by item. Note that there is also a one-page PDF file uploaded for presenting more figures and tables. Pdf: /pdf/c4076885909d74da2b2e1a5dc91255885089b285.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes to employ neural geodesic fields to implicitly represent (1) geodesic distance, (2) the signed distance field, and (3) shortest geodesic path queries under the overfitting paradigm. Experiments are conducted to demonstrate the effectiveness of the proposal. Strengths: • The writing of this paper is clear and easy to follow. • The extension of implicit neural representations to represent geodesic distance and enable efficient queries is novel. It is a simple yet seemingly effective approach under the overfitting paradigm for geodesic implicit representation. Weaknesses: 1. The diversity of the mesh shapes used in the experiments is limited. No realistic real-world meshes are experimented on. The scale of the mesh shapes in the experiments is not diverse enough: e.g., ranging from small toy-scale to large skyscraper-scale meshes to validate the proposal in a more realistic setting. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Page 4, Section 3.2: any ablation study on the input point embedding? Any comparison of FC layers vs. position embeddings for point coordinates? Position embeddings usually work better for avoiding over-smoothing of the input signals for complicated shapes. It might be worth exploring. 2. Section 3.3: any ablation studies on the learning objectives? Why $L_1$? How about others? 3. Table 1: please annotate the best- and worst-performing entries in the comparison. 4. Section 4.3: more ablation analysis is needed. For example, a more conclusive analysis is needed for Table 3. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Rebuttal to Reviewer Racd]** ### **[W1]** *Not enough shape diversity and scale of the used testing meshes.* **Response:** We would like to point out that some of the testing models used in our experiments (such as *armadillo*, *bunny*, *dragon*) are real-world meshes created from real scans. Please see the website of *The Stanford 3D Scanning Repository*. Still, we do agree with the reviewer that it is valuable to perform evaluations on more diverse mesh models. To this end, here we considered both *1) data volume scale* and *2) scene representation scale*. For the former, we experimented with a much denser version of the *dragon* model with 1.5M vertices and another classic graphics model *lucy* (which is also created from real scans) with 6.9M vertices. As reported in the table below, our approach works well on these two highly dense mesh models. | Mesh | #V (M) | MRE (%) | Chamfer-$L_1$ ($\times 10^{-2}$) | | :----: | :----: | :----: | :----: | | dragon | 1.5 | 0.70 | 1.335 | | lucy | 6.9 | 0.58 | 1.404 | For the latter, we further performed evaluations on three large-scale real-scanned meshes covering (a) an indoor room, (b) an urban scene, and (c) a street view, which come from diverse dataset sources. Since these realistic scene scans are highly cluttered, we instead use the classic Dijkstra algorithm to compute geodesics as training and testing data. As shown in Figure R2 of the uploaded PDF file, our approach still achieves satisfactory performance (with MRE rates of 0.37\%, 1.14\%, and 0.23\%, respectively) on the three scene-level meshes. ### **[Q1]** *Ablation study on the position encoding of query point coordinates.* **Response:** Thanks very much for your valuable suggestions. 
Accordingly, as reported in Table R1 of the uploaded PDF file, we conducted the corresponding ablation study by applying a classic position encoding operation (as used in NeRF, ECCV-20') to the input query point coordinates before the subsequent FC layers for high-dimensional feature embedding. Indeed, we can observe that the resulting performances on the two complex meshes *dragon* and *heptoroid* are improved to varying degrees, demonstrating the potential of exploring more advanced position encoding strategies. ### **[Q2]** *Ablation study on the distance metric used to formulate the learning objectives.* **Response:** Thanks very much for your detailed evaluation of our specific choices of technical implementation. In fact, we did experiment with several different ways of formulating the learning objectives in terms of the choice of distance metric. For example, we can choose to use an $L_2$ loss instead of $L_1$ to calculate all learning objectives. Besides, for the formulation of the shortest path supervision (i.e., $\ell_\mathrm{spath}$ formulated as Eq. (8) in the paper), we can also choose the popular Chamfer distance to supervise the curve deformation process. The resulting performances obtained from these two implementation variants are reported in Table R1 of the uploaded PDF file. We can observe that in most cases using the $L_2$ loss causes varying degrees of performance degradation, while the Chamfer distance leads to further performance improvement. Still, in our implementation, we did not adopt the Chamfer distance as our final choice because of its additional computational burden and memory cost. ### **[Q3]** *Annotating the best and worst performing entries in Table 1 of the paper.* **Response:** Thanks very much for your useful suggestion to help improve the formatting quality of our manuscript. We will revise Table 1 accordingly in the final version. 
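For reference, the classic NeRF-style position encoding mentioned in the [Q1] response above can be sketched as follows; this is a minimal NumPy sketch in which the function name and the choice of six frequency bands are illustrative assumptions, not the exact configuration used in the ablation:

```python
import numpy as np

def positional_encoding(p, num_freqs=6):
    """NeRF-style encoding: for each input coordinate, append
    sin(2^l * pi * p) and cos(2^l * pi * p) for l = 0..num_freqs-1."""
    p = np.asarray(p, dtype=float)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = p[..., None] * freqs                     # (..., dim, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)             # (..., dim * 2 * num_freqs)

q = np.array([[0.1, -0.4, 0.7]])                      # one 3D query point
print(positional_encoding(q, num_freqs=6).shape)      # -> (1, 36)
```

The encoding lifts each 3D coordinate into a higher-dimensional, multi-frequency space before the FC layers, which is what helps the network avoid over-smoothing high-frequency surface detail.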
### **[Q4]** *Enriching conclusive analyses in Section 4.3 of the paper.* **Response:** Thanks very much for your valuable comments, and we are sorry for not being able to include detailed ablative analyses in the paper due to page limits. Following your suggestion, we will provide more thorough and in-depth analyses in the supplementary material to facilitate readers' understanding of our approach. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to respond to my review. Please keep improving the paper. I will keep my original ratings for now. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Racd, Thanks very much for your time and efforts in evaluating our rebuttal contents. We will incorporate these newly added discussions and experimental results into the final version to further improve the quality of our paper. Best regards, Authors
null
null
null
null
null
null
The geometry of hidden representations of large transformer models
Accept (poster)
Summary: This work investigates the geometry of hidden representations of transformer models trained via a self-supervised task on either amino acid prediction in proteins or pixel prediction in images. The work uses two tools to understand this geometry: intrinsic dimension (ID) (estimated via the TwoNN algorithm) and neighborhood overlap. The paper shows that as data passes through the layers of a transformer, both the ID and neighborhood overlap change in characteristic ways that the work connects with the extent to which the representation is organized by the semantic content in the data. The paper validates this claim by looking at the extent to which each data point shares neighbors that belong to the same semantic class, showing that this peaks at layers where the ID is low and the change in neighborhood overlap is also low. The work then speculates that these observations could provide a way of identifying the best representations to use for downstream tasks. Strengths: - While there is a rich literature exploring hidden representations of deep learning models, most works continue to focus on CNNs or MLPs with supervised training on medium-sized datasets. Given the growing importance of transformers to NLP and vision and the increasing use of self-supervision for large-scale training, there is a need for works that explore how these approaches impact a model’s hidden representations. Thus, this is a welcome work that will doubtless be of value to researchers training large transformers via self-supervision. - The paper contains careful analysis of the experiments (as opposed to other works which all too frequently just list summary statistics). The conclusions that are reached are all fairly well supported within the scope of the experiments that were performed. Crucially, the two metrics that are used, intrinsic dimension and neighborhood overlap, reinforce each other’s conclusions. This increases the believability of the results significantly. 
- The paper is able to suggest some practical value (identifying layers of the transformer that capture the most semantic content) in its scientific observations (patterns in representation geometry in large transformers), helping to connect practice with theory. Weaknesses: - The experiments in this work all use transformer models trained via self-supervised tasks based around data reconstruction (as opposed, for example, to a contrastive learning task). While most of the phenomena observed in this paper are attributed to the “large transformer” architecture, this reviewer wonders if some of the conclusions are also contingent on the self-supervision task. For example, does one see similar behavior in a large vision transformer that has been trained in a supervised manner? Disentangling which of the conclusions are due to architecture and which are due to training method would make the work significantly stronger. - Many of the claims in the work rely on comparisons between the shapes of curves that plot the intrinsic dimension of data across layers. While the claims seem mostly reasonable given the figures, it would make the work stronger if more quantitative measures were used to, for example, compare ID curves. This sort of automation might also make it easier to compare against a broader range of models and datasets. While this reviewer thought that the use of both protein and image datasets was a strength of this work, comparing against other types of transformer architectures and datasets would reinforce the conclusions. - The main content of the paper consists of discussion of several figures. This reviewer felt that the way this discussion was written/organized made it easy for the reader to get disoriented and forget the main points already established. Possibly making the discussion more concise or better highlighting the main takeaways from each section would help the reader to mentally organize the primary findings of the work. 
Being more succinct might also allow further experiments to be included. Nitpicks - The abstract describes a transformer as being composed of a “sequence of functionally identical transformations”. To this reviewer, functionally identical transformations would be transformations that behave the same way on the level of functions (for the same inputs, they give the same outputs), but may be parametrized in different ways. The layers of a transformer are instead transformations that all belong to the same functional family, though each layer is generally a different function. - Some sentences in this work have an over-abundance of commas. For example, “In this work, we systematically investigate, in some self-supervised models, fundamental geometric properties of representations, such as their intrinsic dimension (ID) and neighbor composition.” To improve the flow of the paper, it would be good to find such sentences and remove some of the commas. - Line 66: “…where the annotation is very scarce” -> “…where the annotations are very scarce.” - Reading this work would have been easier if the figures were closer to their corresponding analysis. As it is, the reader must flip between pages frequently to validate claims. - Line 110: “More in detail,” -> “In more detail,”. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Given the success of transformers trained with self-supervised learning in the NLP domain, I was curious why the authors did not study this setting? - Line 76: Is the assumption of locally constant density really “mild”? This reviewer wonders if image datasets, which have moderately high intrinsic dimension ([1] estimates ImageNet as having intrinsic dimension between 26 and 43) might be fairly sparsely sampled even when the dataset is large. - What happens to the results when $k$ is varied (rather than being fixed at 30)? 
- The paper says that hidden representations are extracted after the first normalization layer of the attention blocks. Do you have a sense of whether results change if you use representations from other parts of the attention blocks? - The results show the intrinsic dimension of the data increasing at times between layers. We know that mathematically, maps $f: M \rightarrow \mathbb{R}^n$ that map a $k$-dimensional manifold $M$ into $\mathbb{R}^n$ such that $\dim(f(M)) > k$ exist (e.g., space-filling curves), but it is this reviewer's understanding that the space of such $f$ has measure zero with respect to many reasonable probability measures on the space of such maps. Given this, is it really possible that the intrinsic dimension of the representations is increasing? Or is there some other change that is affecting the ID estimation? [1] Pope, Phillip, et al. "The intrinsic dimension of images and its impact on learning." arXiv preprint arXiv:2104.08894 (2021). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This reviewer believes that some limitations could have been discussed, including: - The use of intrinsic dimension estimators, which can provide misleading feedback in certain situations. - A limited number of different model types and datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer wEGg for carefully reading our manuscript and for providing several points of discussion that we address below. **Weaknesses** *[...] Disentangling which of the conclusions are due to architecture and which are due to training method would make the work significantly stronger.* We agree with the referee on this point. We will stress further in the final version of the manuscript that our analysis is specifically focused on transformer models trained by self-supervised reconstruction tasks (such as MLM or LM). We address the critical role of self-supervision in Appendix A.3.4: we show that self-supervised pre-training is crucial for the emergence of the three-phased behavior by comparing our results with vision transformers trained for image classification. In that case, we observe a much less pronounced second peak and a decrease of the ID in the last layers to values lower than that of the first embedding. *[...] it would make the work stronger if more quantitative measures were used to, for example, compare ID curves.* It is true that we based our analysis of the ID curves on inspection of the figures: since the domain is one-dimensional, this intuitive approach is particularly effective. However, since the ID is measured on a small number of layers, the local minimizers of the ID can also be found easily by a brute-force search. *[...] comparing against other types of transformer architectures and datasets would reinforce the conclusions.* We kindly refer the reviewer to our response to Question 1. *This reviewer felt that the way this discussion was written/organized, it was easy for the reader to get disoriented and forget the main points already established.* We will take this advice into consideration when drafting the final version of the paper. 
**Questions** *I was curious why the authors did not study this (NLP) setting?* We kindly refer the reviewer to the global response about our preliminary investigation of NLP. *Line 76: Is the assumption of locally constant density really “mild”?* The reviewer is correct in highlighting potential issues in assuming locally constant density for datasets with high ID. To quantitatively validate this assumption, we employed the Point Adaptive kNN (PAk) method introduced by [2]. PAk determines, for each individual data point, the extent of the neighborhood over which the probability density can be considered constant, subject to a specified level of confidence. Applying PAk (as implemented in [3]) to our dataset revealed that, on average, the density can be considered constant within the first 6 neighboring data points. In our study, the ID is measured using the distances between a data point and its first two nearest neighbors. This analysis allows us to conclude that, at this scale, the assumption of locally constant density holds. *What happens to the results when k is varied [...]?* We addressed the robustness of our findings with respect to the hyperparameter k in Appendix A.3.4 (see Fig. 6). Fig. 6 shows that the neighborhood overlap curves remain essentially unchanged for k<50, both between successive layers (Left) and with the ground-truth labeling of the data (Right). *Do you have a sense of whether results change if you use representations from other parts of attention blocks?* In Fig. 2 of the attached PDF, we extended the analysis of the ID and the overlap with the ground-truth labels ($\chi^{l, gt}$) to the representations after the first normalization layer and after the attention maps of each self-attention block for the iGPT model on ImageNet. The ID and $\chi^{l, gt}$ profiles are consistent with those shown in the manuscript, which are extracted before each attention block, indicating the robustness of our analysis with respect to the layer choice. 
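As a concrete reference for the scale argument above — the ID here depends only on each point's first two nearest-neighbor distances — the TwoNN estimator (Facco et al.) can be sketched as follows; this is an illustrative NumPy version of the maximum-likelihood form, not the code actually used in the paper:

```python
import numpy as np

def twonn_id(X):
    """TwoNN intrinsic-dimension estimate: for each point take the ratio
    mu = r2 / r1 of its second to first nearest-neighbor distances; the
    maximum-likelihood estimate of the ID is N / sum(log(mu))."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)           # exclude self-distances
    r12 = np.sort(D, axis=1)[:, :2]       # r1 and r2 for every point
    mu = r12[:, 1] / r12[:, 0]
    return len(X) / np.sum(np.log(mu))

# Sanity check: points drawn uniformly from a 5-dimensional cube
# should yield an estimate near 5.
rng = np.random.default_rng(0)
estimate = twonn_id(rng.uniform(size=(1000, 5)))
```

Because only the two nearest neighbors of each point enter the estimate, the locally-constant-density assumption needs to hold only at that very small scale, which is exactly what the PAk analysis above verifies.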
Due to time constraints in preparing this response, we performed the analysis only for iGPT-large. Nevertheless, we are confident that the observed trends would also hold true for smaller models. *[...] is it really possible that the intrinsic dimension of the representations is increasing? Or is there some other change that is changing the ID estimation?* The reviewer is indeed right in saying that, from a mathematical standpoint, the set of continuous functions $f: R^{d}\to R^{d}$ mapping a k-dimensional manifold $M\subset R^{d}$ onto a set that fills a manifold $M’$ with $dim(M’)>k$ is of measure zero. However, the manifold hypothesis, which is one of the working assumptions of our analyses, states that datasets (and their representations) typically lie “in proximity” to a smooth manifold, often of much lower dimension than the embedding space. In particular, the ID approximates the number of independent coordinates that are necessary to describe the dataset without significant information loss [4]. When dealing with a finite set of data points, the ID typically represents the number of dimensions along which the dataset displays large/significant variations, neglecting those directions where the data variability is small/non-significant. Thus, the space of continuous (even smooth) functions that increase the intrinsic dimension in the sense of the manifold hypothesis is not of measure zero. **Limitations** *The use of intrinsic dimension estimators, which can provide misleading feedback in certain situations.* We will include the above discussion of the local density assumption, which addresses this concern, in the revised version of the manuscript. 
**References** [1] Touvron et al., "Llama 2: Open Foundation and Fine-Tuned Chat Models", arXiv:2307.09288, 2023 [2] Rodriguez et al., "Computing the free energy without collective variables", Journal of Chemical Theory and Computation, 2018 [3] Glielmo et al., "DADApy: Distance-based analysis of data-manifolds in Python", Patterns, 2022 [4] Facco et al., "Estimating the intrinsic dimension of datasets by a minimal neighborhood information", Scientific Reports, 2017 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed response. It is especially interesting to see the initial results on language models and that the choice of $k$ does not strongly impact the findings. I have updated my rating to a 7. In general I think this is an interesting work and enjoyed the opportunity to read it.
Summary: This paper presents an analysis of the internal representations of transformers trained with self-supervised learning, such as masked language modeling, from two perspectives: intrinsic dimension and adjacency structure. Experiments on two datasets - protein sequences and image data - revealed that the intrinsic dimension within the transformer has two peaks, one in the early layers and another in the latter layers. This result is qualitatively different from previous convolutional networks trained with supervised learning. Notably, the intermediate layer, where the intrinsic dimension is minimized, is the most suitable for categorical discrimination in downstream tasks. Strengths: This paper presents a notable analysis of the intrinsic dimension of transformer models trained with self-supervised learning. While many approaches have been used to analyze the internal dimension of deep neural networks, most have focused on convolution-based networks. The discovery of two peaks in the internal dimension is intriguing, offering insightful contributions to the community. The results are clearly presented, and the paper is well-structured, making it easy for readers to follow the logical development. Of particular interest is the finding that such representations can spontaneously appear in models trained with masked modeling, even without an explicit bottleneck structure in the model. However, this interesting result also raises many considerations and discussions. There are points in the current paper where sufficient evidence to support the authors' claims is lacking and further room for discussion remains. These points will be discussed in detail below. Weaknesses: The paper's experimental comprehensiveness raises two issues. The first issue revolves around the focus on internal representations of transformer models trained with self-supervised learning, with the assertion that the evolution of the internal dimension displays two peaks. 
However, it appears that sufficient experiments have not been performed to definitively identify the causes of this occurrence. An ablation study would be incredibly beneficial in distinguishing the elements that stem from the model architecture (transformer) from those arising from the training protocol (masked/autoregressive modeling). Secondly, the paper states in the introduction that the adjacency structure is “rearranged” at the peak of the representation’s internal dimension, regardless of the dataset domain. Yet, upon qualitative comparison of Figures 1 and 2, it is challenging to confirm a consistent change in the internal dimension and the overlap of the adjacency structure between layers in the image domain experiments. Therefore, the current portrayal of the contributions gives the impression of over-claiming. As asserting a clear correlation in qualitative behavior is difficult, a comparison with some control conditions should be made. Additionally, it would be desirable to have a discussion concerning the influence of differences in the data domain. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I have several questions for the authors: 1. In interpreting Figure 1, the main text mentions that the changes in the internal dimension qualitatively match across the two domains. However, there seems to be a qualitative difference between domains concerning whether there is concentration or expansion near the final layers in the latter half. What could be the reasons for such a difference? 2. Regarding the analysis of the overlap of the adjacency structure, the paper mentions that the authors used $k=10$ and $k=1$ in the protein domain experiments, and $k=30$ in the image domain experiments. Generally, the experimental results would depend on this hyperparameter. How robust are the authors' claims with respect to these hyperparameters, and what is the rationale or justification for the chosen hyperparameter values? 3. 
This paper mentioned that the authors used the GAP type averaging procedure along the token direction when analyzing the internal representation of the transformer model. While this is not a problem when comparing results within the same model architecture, it seems caution is needed when comparing the results with other models. For instance, when comparing the results with convolution-type neural networks, how can we guarantee the qualitative robustness of the statements depending on whether this averaging operation is performed or not? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Given that this paper is not proposing a new method but focusing on analyzing the internal representation of existing methods, it may not strictly apply to a discussion on limitations. The estimation of the internal representation is based on a method proposed in previous research, and the applicability of this method has been sufficiently demonstrated. However, as mentioned in the Weakness and Question sections, there are concerns about the comprehensiveness of the experiments in this paper. Improvements in these points would lend greater significance to the authors' claims. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer kxbE for carefully reviewing our manuscript and for the constructive comments. **Weaknesses** *An ablation study would be incredibly beneficial in distinguishing the elements that stem from the model architecture (transformer) and those arising from the training protocol (masked/autoregressive modeling).* A specific ablation study is described in Appendix A.3.4, Fig. 7, where we examined vision transformers (ViT) trained for an image classification task. In this case, the ID profile exhibits a much less prominent second peak followed by a gradual decrease of the ID below its initial values, consistent with the established pattern typically observed in image classifiers [1]. This example shows that the shape of the ID profiles is influenced by the training protocol and is not an inherent characteristic of the transformer architecture itself. We are investigating how fine-tuning affects the models we analyzed in the manuscript. However, it is crucial to emphasize that the primary focus of this work is on studying ID profiles in models trained through self-supervision. More importantly, we aimed to establish connections between the low-dimensional representations and the layers where the semantic properties of the data are better expressed. *[...] upon the qualitative comparison of Figures 1 and 2, it is challenging to confirm a consistent change in the internal dimension and the overlap of the adjacency structure between layers [...] the current portrayal of the contributions gives the impression of over-claiming. [...] Additionally, it would be desirable to have a discussion concerning the influence of differences in the data domain.* We would like to remark that the value of chi reported in Fig. 2 of the manuscript refers to the overlap between consecutive layers. The average value of chi in the first part of the network is of order 0.5 in the case of protein sequences and of order 0.7 in the case of images. 
This implies that after ~10 layers the overlap with the input is roughly $0.5^{10} \approx 0.1\%$ in the case of protein sequences and $0.7^{10} \approx 3\%$ in the case of images. We believe that this can be called a significant rearrangement of the neighborhood, at least in the case of proteins. In order to avoid over-claiming, we removed the word “profoundly” before “rearranged” in the introduction and in the text, since in the case of images an overlap of ~3% after the peak might indicate the survival of some faint memory of the input. We will also state explicitly that the rearrangement is more significant in the case of protein sequences, highlighting the differences between the two data domains in this respect. **Questions** *However, there seems to be a qualitative difference between domains concerning whether there is concentration or expansion near the final layers in the latter half. What could be the reasons for such a difference?* We kindly refer the reviewer to the global response about the qualitative difference of the ID profile between the iGPT and ESM models. *How robust are the authors' claims with respect to these hyperparameters [k=1, k=30], and what is the rationale or justification for the chosen hyperparameter values?* We chose k=1 for pLMs to show how nearest-neighbor search in plateau layers improves the identification of protein relations. Other k values (k=10, k=30) do not affect the observations presented in our manuscript. We addressed the robustness of our findings with respect to the hyperparameter k in Appendix A.3.4. In particular, Fig. 6 shows that the neighborhood overlap curves remain essentially unchanged for k<50, both between successive layers (Left) and with the ground-truth labeling of the data (Right). *This paper mentioned that the authors used the GAP type averaging procedure [...] 
when comparing the results with convolution-type neural networks, how can we guarantee the qualitative robustness of the statements depending on whether this averaging operation is performed or not?* We emphasize that the global average pooling (GAP) approach was not intended for comparing our results regarding the geometric properties of transformer representations with the characteristics of hidden layers in convolutional architectures. However, to directly address the concern raised by the reviewer regarding the robustness of the GAP procedure in convolutional neural networks (CNNs), we conducted a test on the representations of CIFAR10 in the Wide-ResNet28-8 model [2]. We describe the results in Fig. 4 of the attached PDF. The left panel shows the intrinsic dimensionality (ID) computed in the full feature space, while the right panel displays the ID after applying GAP along the spatial dimensions (width and height). Notably, we observe a qualitative consistency in the shape of the ID profiles in both cases, as they conform to the typical bell-shaped curve characteristic of CNN architectures [1]. From a quantitative perspective, the ID profile after GAP exhibits a downward shift as a consequence of the averaging procedure, particularly pronounced at the initial stages of the architecture. This effect is likely due to the low number of channels after the first block (16) compared to the later blocks (128, 256, 512 respectively). Nevertheless, we can be cautiously confident about the qualitative robustness of the profiles as long as the number of “channels” (where channels are interpreted as embedding dimension) significantly surpasses the ID as in large transformer models. Indeed, in this case the number of "channels" is substantially higher than in early CNN layers (ranging from 512 to several thousands in modern large language models), and remains constant throughout the architecture. 
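For concreteness, the neighborhood-overlap measure discussed throughout this exchange — the average fraction of shared k-nearest neighbors between two representations of the same data points — can be sketched as follows; this is an illustrative NumPy version with our own function names, not the paper's actual code:

```python
import numpy as np

def knn_indices(X, k):
    """Indices of the k nearest neighbors of each row of X (self excluded)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)        # a point is not its own neighbor
    return np.argsort(D, axis=1)[:, :k]

def neighborhood_overlap(X_a, X_b, k=30):
    """Average fraction of shared k-nearest neighbors between two
    representations of the same ordered set of data points."""
    idx_a, idx_b = knn_indices(X_a, k), knn_indices(X_b, k)
    return float(np.mean([len(set(a) & set(b)) / k
                          for a, b in zip(idx_a, idx_b)]))
```

Applied to (pooled) hidden states of two consecutive layers, this would correspond to the between-layer overlap curves; identical representations give an overlap of 1, while unrelated ones give an overlap near k/N.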
[1] Ansuini et al., “Intrinsic dimension of data representations in deep neural networks”, 32nd Conference on Neural Information Processing Systems, 2018 [2] S. Zagoruyko and N. Komodakis, “Wide Residual Networks”, Proceedings of the British Machine Vision Conference 2016 --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your feedback on my questions and concerns. The feedback from the authors' addressed all of my questions: qualitative difference in the domain, robustness on hyperparameters, use of GAPs. Based on their responses, I would like to increase the score to 6.
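The neighborhood-overlap metric whose robustness in k is discussed in this thread can be sketched as follows. This is a self-contained toy implementation (brute-force distances, our own variable names), not the authors' code:

```python
import numpy as np

def knn_indices(X: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest neighbors of each row (self excluded)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point from its own list
    return np.argsort(d2, axis=1)[:, :k]

def neighborhood_overlap(X_a: np.ndarray, X_b: np.ndarray, k: int = 30) -> float:
    """Average fraction of shared k-nearest neighbors between two
    representations of the same data points (e.g. two layers)."""
    nn_a, nn_b = knn_indices(X_a, k), knn_indices(X_b, k)
    shared = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(shared))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
# A translated copy preserves all pairwise distances, so overlap is 1.
print(neighborhood_overlap(X, X + 5.0, k=10))  # 1.0
```

An overlap near 1 means the two layers induce essentially the same neighborhood structure; comparing a layer's neighborhoods with a ground-truth labeling instead gives the class-overlap curves mentioned above.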
Summary: This work focuses on characterizing the geometrical and statistical properties of data representations across the hidden layers of large transformer models. Specifically, it demonstrates the similarity in the evolution of geometric properties, such as the intrinsic dimension, between image-based and protein-based transformers. Additionally, the work proposes an intuitive, unsupervised strategy to identify the most semantically rich representations of transformers. The effectiveness of this strategy is demonstrated by the SOTA performance achieved by leveraging previous work and replacing the last layer with the layer identified in this study. Strengths: - This work presents a novel strategy to select the layers that produce the most semantically rich representations to be used in downstream tasks, opening many possible applications. Moreover, the method seems to be general and could be exploited in other architectures where the layer choice is currently arbitrary, such as classifiers. - This work demonstrates state-of-the-art results in identifying protein relations by leveraging existing methods. It simply involves swapping the previously employed last layer with the layer that maximizes the semantic content identified through the strategy proposed in this work (Figure 5, right). - The writing is extremely clear and easy to follow, presenting intuitive ideas, a solid experimental setup, and convincing results. - This work does not involve training models but instead analyzes publicly available models, completely avoiding any possible bias that could have been introduced in the training process. - Among others, the insight that large transformers behave essentially like sophisticated autoencoders (lines 337-338) is insightful and may serve as a source of ideas for future research. Weaknesses: This work presents a novel idea with exceptional clarity in its writing, accompanied by a robust experimental setup that yields convincing results. 
While no major weaknesses were identified, a more comprehensive discussion of the current limitations of the study would enhance its overall quality even further. Minor: - In line 36 (and others), the paper mentions the term "semantically rich representation." However, it is important to clarify how this term is defined. It can be argued that the richness of semantic content in a representation is dependent on the specific downstream task being addressed. In other words, a representation can be considered more or less semantically rich based on its effectiveness in a particular downstream task. - To enhance readability, it would be beneficial to include the ID (or at least the minimum ID) directly in the plot in Figure 4. This additional information (even though repeated from Figure 3) would help the reader compare the peak categorical information with the ID minimum. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In lines 131-132, the paper mentions a method to compress the resolution and color channels of images to address memory constraints. However, it raises the question of why a methodology similar to the one described in [1] was not adopted. In [1], a "first-stage" autoencoder was trained specifically to compress the images into lower-dimensional latent representations while maintaining perceptual equivalence (and kept frozen in subsequent stages). Is there a specific reason why this approach was not considered in the current work? - In line 286, it is reported that the plateau extends beyond half of the layers. This raises the question of whether this observation could be an indication that the model is over-parameterized or, at the very least, has more layers than necessary. Further investigation or discussion could shed light on this question and provide valuable insights into the minimum network complexity needed to produce representations with the same semantic information. 
[1] Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. “High-Resolution Image Synthesis with Latent Diffusion Models.”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Although not extensively, Section 4 explores the limitations and potential areas for future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to reviewer 4o8z for their appreciation of our work, for carefully reviewing our manuscript, and for the stimulating comments. **Weaknesses** *In line 36 (and others), the paper mentions the term "semantically rich representation." However, it is important to clarify how this term is defined. It can be argued that the richness of semantic content in a representation is dependent on the specific downstream task being addressed. In other words, a representation can be considered more or less semantically rich based on its effectiveness in a particular downstream task.* We agree with this observation, and we will clarify that the expression "semantically rich representation" is relative to a specific task (typically of classification). *To enhance readability, it would be beneficial to include the ID (or at least the minimum ID) directly in the plot in Figure 4.* We thank the reviewer for this suggestion. We will consider adding an explicit indication of the minimum ID in the plot of Fig. 4 in the revised version of the manuscript. **Questions** *In lines 131-132, the paper mentions a method to compress the resolution and color channels of images to address memory constraints. However, it raises the question of why a methodology similar to the one described in [1] was not adopted.* The resolution and color channel compression mentioned in Line 131 of the manuscript are part of the original encoding procedure of the iGPT models developed by [2], and thus were not our decision. Our work focuses on the analysis of transformer models developed by other groups and trained by self-supervision. In the discussion section, we will highlight that an examination of the approach by [1] is left for future research. *In line 286, it is reported that the plateau extends beyond half of the layers. 
This raises the question of whether this observation could be an indication that the model is over-parameterized or, at the very least, has more layers than necessary.* This is a very interesting and pertinent point. Indeed, we recently started exploring the possibility of eliminating certain layers in this part of the network, or replacing them with a suitable transformation, without compromising the performance. We will highlight this as a potential avenue for future research in the discussion section. **References** [1] Rombach et al., “High-Resolution Image Synthesis with Latent Diffusion Models”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 [2] Chen et al., “Generative Pretraining From Pixels”, 37th International Conference on Machine Learning, 2020 --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! It addresses the comments and questions raised. I do not have any further questions, and I confirm my rating.
Summary: The paper studies the hidden representations of pretrained transformers via the ID (intrinsic dimension) on protein language tasks and image reconstruction tasks. The paper shows that in protein LMs, from the input to the output layer, the ID first increases to a peak, then decreases to an elbow, and finally increases to near its value at the input layer; this is robust from the smallest to the largest models. For image reconstruction, the picture is quite different; the input & output layers have the smallest ID, and there are two peaks near the input and output layers, respectively. The results on iGPT are less robust compared to protein LMs (e.g., small iGPT does not seem to have two peaks). Overall, I think the paper is well written, the presentation and experimental setup are clean, and the results are interesting. However, I am not very familiar with the field and could not judge the novelty of the paper. Strengths: A well written paper; the structure is clean; the presentation is good; the authors spent a decent amount of effort to introduce / motivate / explain key concepts used in the paper, e.g. ID etc. The results seem interesting even to folks who are not working in this area. Weaknesses: The results on iGPT are less robust (compared to protein LMs). In particular, for the small model, I don’t see two peaks. Naively, I would also expect the shape of the ID of iGPT to be similar to pLMs: encode (decreasing ID, smaller than input) and decode (increasing ID), just like the elbow in the pLMs. Is there an explanation? Why not also include a third task, NLP (causal language model), in the main text? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Have several discussions about further extension of the current approach. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer K92u for reviewing our manuscript. **Weaknesses/Questions** *The results on iGPT are less robust (compared to protein LMs). In particular, for the small model, I don’t see two peaks.* We agree that in the small model the second peak is absent. However, the small model is significantly less accurate than the larger model, both in terms of validation loss (refer to Fig. 3, [1]) and performance in image classification tasks, as evaluated through linear probes (see Fig. 3, [1]) and neighborhood overlap (Fig. 4, right panel, in our work). We emphasize that also in the case of protein language models (pLMs), the smallest model (ESM2-8M, see attached PDF Fig. 1) does not exhibit a clear three-phased ID profile (no peak and plateau), and the three-phased behavior becomes gradually more pronounced when increasing the size of the models, similarly to the case of the iGPT models. Indeed, large transformer models can develop qualitative changes in their performance and internal workings as their size is scaled up, so that a feature that is absent in smaller models can suddenly appear as the number of parameters is increased [2]. *Naively, I would also expect the shape of the ID of iGPT to be similar to pLMs: encode (decreasing ID, smaller than input) and decode (increasing ID), just like the elbow in the pLMs. Is there an explanation?* We kindly refer the reviewer to the global response about the difference of the ID profile between the iGPT and ESM models. *Why not also include a third task, NLP (causal language model), in the main text?* We kindly refer the reviewer to the global response about our preliminary investigation of NLP. **References** [1] Chen et al., “Generative Pretraining From Pixels”, 37th International Conference on Machine Learning, 2020 [2] Wei et al. 
“Emergent Abilities of Large Language Models”, Transactions on Machine Learning Research, 2022 --- Rebuttal Comment 1.1: Title: Re Comment: Thanks for the detailed response, in particular, for running new experiments on the 70B llama-2 model. I will keep my score.
Rebuttal 1: Rebuttal: We thank all reviewers for the detailed and diligent reading of our paper, from which we took a lot of cues on how to improve the quality of our work. We would also like to express our gratitude for the general appreciation of our contribution. We reply to common points raised by some Reviewers here below. **We remind everyone that the PDF file containing the figures supporting our responses is attached to this post.** *Reply to the concern raised by reviewers **K92u** and **kxbE** regarding the difference in the final part of the ID profiles between iGPT and ESM.* We agree with the reviewers that there is a qualitative difference in the last part of the ID profiles. In the main text, we mention that the “three phases can be recognized in all the transformers and a fourth phase which we observed only in the iGPT models” (lines 172-173). We also emphasize that “In the last part of the network [iGPT], at variance with what is observed in pLMs, the ID does not remain constant but grows again (more moderately) forming a second shallow peak almost at the end of the network” (lines 219-221). Indeed, the ID in the middle part of the architecture need not be smaller than that at the input. As we state in the manuscript, the ID in the middle layers of the architecture is connected to the semantic complexity of the dataset, which is different in the two datasets. More precisely, in the context of protein sequences, Facco et al. (2019) show that the ID of a family of proteins, which measures the amount of variability arising from evolutionary changes in protein sequences, is between 6 and 12. Conversely, when dealing with images, the ID related to the semantic complexity of the dataset could be roughly measured from the values at the output of state-of-the-art classifiers, where the representation is most compact. 
For instance, in the case of the ImageNet dataset, the ID at the output of various ResNet CNNs is within the range of 15-20 (Ansuini et al., 2019). These values are remarkably consistent with those observed in our study: 5-6 for ESM-2, and 22 for iGPT on ImageNet (see Fig. 1 in the manuscript). On the contrary, the ID at the first layer of the network is influenced by various aspects of the raw input data, such as the brightness of the input pixels (Ansuini et al., 2019) or other factors, and can be either larger or smaller than the ID measured at the middle layers of the network. Since in the output layer the network tries to reconstruct (part of) the input data, these considerations can explain why there can be concentration (iGPT) or expansion (ESM) in the final layers of the network. However, the *key finding of our study remains consistent in both cases*: the layers where the ID reaches a local minimum correspond to representations in which the overlap with a semantic property of the data is most pronounced. *Reply to the question of reviewers **K92u** and **wEGg** regarding the potential extension of the current work to the NLP domain.* At the time of submission, the transformers we analyzed, trained on Natural Language Processing (NLP) tasks, were not large enough to observe a second peak akin to the one reported in the manuscript for iGPT. In the Appendix, we reported the estimate of the intrinsic dimension (ID) of a GPT-2-XL model on the Stanford Sentiment Treebank (SST) dataset (see Fig. 8 in the Appendix), and commented on our findings. Recently, we conducted the same experiment on the much larger Llama-v2 model (70B parameters) that was released last July (Touvron et al., 2023). In this latter case, described in Fig. 3 of the attached PDF, our preliminary analysis shows that the evolution of the ID profile is more complex: after the first peak, the ID increases again in the middle of the architecture. 
Remarkably, despite the dataset being quite different from ImageNet and Uniref, the *key finding of our present work remains consistent in the NLP case as well*: in correspondence with the first local minimum at one-third of the architecture depth (0.3 relative depth), the overlap with the class partition (given by the sentiment of the sentence) is the highest. After the second peak, the ID profile shows a more complex behavior. We believe that a thorough analysis of this and other downstream tasks is necessary to fully decipher the meaning of such profiles. We are excited to clarify and extend these aspects in a follow-up of this work, dedicated to NLP tasks. **References** Facco et al., “The intrinsic dimension of protein sequence evolution”, PLOS Comp. Bio., 2019 Ansuini et al., “Intrinsic dimension of data representations in deep neural networks”, 32nd Conference on Neural Information Processing Systems, 2018 Touvron et al., “Llama 2: Open Foundation and Fine-Tuned Chat Models”, arXiv:2307.09288, 2023 Pdf: /pdf/add4d88d2bec5a3a518cb3212b4ddf1923ebd046.pdf
NeurIPS_2023_submissions_huggingface
2023
Conditional Score Guidance for Text-Driven Image-to-Image Translation
Accept (poster)
Summary: This paper introduces a novel approach for text-driven image-to-image translation tasks. The main contribution of this work is the development of a conditional score function that takes into account both the source image and text, in addition to the standard condition on the target text. The new score function consists of two terms: the standard score function conditioned on the target prompt, and the guiding score function, which models the posterior of the source latent given the target latent and target prompt. The paper also proposes an effective mixup strategy in the cross-attention layers of the text-to-image diffusion model to facilitate image-to-image translation. Experimental results demonstrate the outstanding performance of the proposed method on various tasks. The paper also introduces an intuitive performance evaluation metric that measures the fidelity of pairwise relations between images before and after translation. Strengths: 1: Detailed theoretical explanation of the algorithm used and the overall setup, providing a strong foundation for the study. 2: The paper offers a novel mixup method that enhances the conditional score guidance; the mixup method effectively combines two outputs of the cross-attention layers. 3: Comprehensive experimentation and qualitative/quantitative measures back up the claims. Comparison with state-of-the-art methods showcases superiority in most cases. 4: The paper provides a novel metric (Relational Distance) for evaluating the methodology. The new metric quantifies how faithfully the relational information between source images is preserved among the translated target images, which is a good contribution. Weaknesses: 1: Lack of clarity on the practicality of the proposed method. It would be very valuable to discuss the computational cost (time, memory) and efficiency of the proposed method compared to other techniques. 
2: The paper only focuses on one pre-trained model (Stable Diffusion) for the experiments. More experiments with other pre-trained models should be shown to demonstrate that the proposed method generalizes well. 3: Insufficient coverage of the limitations of the method. There is no extensive discussion of the limitations of the method or the scenarios in which it will not work well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The method proposed appears to be heavily reliant on the quality of the source image. Could there be issues if the source image isn't high quality, for instance, leading to degraded results? Please see the weaknesses for more. I will change my rating based on the rebuttal and other reviewers' comments. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not discussed. One potential limitation is listed in the questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank you for your constructive and positive comments, and below are our responses to the main questions. Q1. Computational cost in terms of time and memory In order to observe the realistic speed of each algorithm, we measure the wall-clock time using an NVIDIA A100 GPU with a single image while checking the memory consumption. Although the naive DDIM translation algorithm has the fastest inference time as presented in Table A8, it achieves poor generation results as mentioned in the main paper and supplementary material. For the theoretical inference comparison, note that CSG requires approximately 0.5x extra inference cost relative to DDIM, since CSG needs an extra computation of reversing the latent using the source prompt embedding, unlike DDIM. However, in the case of CSG and Pix2Pix-Zero, there is a disparity between theoretical and practical inference costs, since the communication cost of copying cross-attention maps from GPU memory to CPU memory is not negligible. In CSG, in order to save GPU memory, our algorithm applies the resizing operation of the cross-attention maps on the CPU when computing the smooth content mask, which further slows inference. Therefore, the practical inference time can be reduced if we have enough GPU memory. Table A8: Computational cost of the proposed method compared to DDIM and Pix2Pix-Zero. | | DDIM | Pix2Pix-Zero | CSG w/o mixup | CSG | |:----------------:|:---------------:|:---------------:|:---------------:|:---------------:| | time/image (s) | 5.129 | 28.647 | 19.791 | 25.736 | | GPU Memory (GB) | 6.840 | 11.546 | 10.030 | 10.040 | Q2. Other pre-trained text-to-image diffusion models Our framework generalizes well to other diffusion models. To show this, we visualize generated images given by CSG using LDM in Figure G of the rebuttal document (Rebuttal_CSG.pdf). 
The figure demonstrates that our conditional guidance method can generalize well. Note that LDM is similar to Stable Diffusion; however, the training data and resolution of LDM are different from those of Stable Diffusion. In the case of other pre-trained text-to-image diffusion models such as DALLE-2 [E1] and Imagen [E2], they are not publicly available, so we cannot test CSG using those pre-trained text-to-image diffusion models. Q3. Limitations Our method can fail to edit images with complex prompts due to the incompetence of pre-trained text-to-image diffusion models. Like other text-driven image-to-image translation methods, the proposed method also has another limitation in that it cannot be applied to complex tasks such as enlarging or moving an object, where it would be interesting future work to solve such difficult tasks. For the potential negative societal impact, our method can generate harmful or misleading content due to the pre-trained model. We will add the limitations and negative societal impacts in the final version. Q4. Translation results of degraded source images We visualize qualitative results using degraded source images in terms of bounding-box removal in the object, lighting changes, and noise addition in Figure F in the rebuttal document (Rebuttal_CSG.pdf). Although the quality depends on the quality of the source image, as presented in the figure, we can enhance the visual quality. For example, in the case of removal in the object, we can address the problem by applying existing image inpainting techniques to the source images and then translating the modified source images using CSG. We appreciate this good suggestion, and we will discuss the limitation in the final version. Reference [E1] A. Ramesh et al., Hierarchical Text-Conditional Image Generation with CLIP Latents, arXiv 2022. [E2] C. Saharia et al., Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, NeurIPS 2022. 
--- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank you for crafting the rebuttal! W1: The efficiency of the proposed method is commendable. It's encouraging to see. W2: It would be intriguing to explore the applicability of the proposed method to diffusion models without relying on an auto-encoder for latent generation. Specifically, investigating denoising within the pixel-space. I recognize that this lies beyond the scope of the current approach, so it is entirely fine. W3: The discussion regarding limitations is well-articulated. W4: Your inclusion of visualizations is greatly appreciated. A valuable addition indeed. Overall, I find that my concerns have been thoughtfully addressed. I am inclined to maintain my rating as a weak accept. --- Reply to Comment 1.1.1: Title: Thanks for the comment Comment: We greatly appreciate your positive comments about the visualization about degraded images and the efficiency of the proposed method. In case of the pre-trained text-to-image diffusion models modeling the pixel space, we hope that such models are publicly available so that we can test CSG using the diffusion models. We believe that our method can be incorporated into such models, and it would be intriguing to work towards this direction. Overall, we will add limitations and negative social impacts, and revise the main paper to reflect your comments. Once again, we sincerely thank you for your time and efforts to review our paper. Best wishes, Authors
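As a rough illustration of the cross-attention mixup discussed in this thread (blending two cross-attention outputs with a smooth content mask), here is a minimal sketch. The mask construction and the exact mixing rule in the paper may differ; shapes and names here are our assumptions:

```python
import numpy as np

def cross_attention_mixup(attn_src: np.ndarray,
                          attn_tgt: np.ndarray,
                          mask: np.ndarray) -> np.ndarray:
    """Blend source and target cross-attention outputs with a smooth
    content mask in [0, 1] (1 = keep the source context, 0 = allow the
    target prompt to edit). All arrays share the same shape, e.g.
    (heads, tokens, dim). Illustrative sketch, not the paper's code.
    """
    return mask * attn_src + (1.0 - mask) * attn_tgt

# Toy example: constant source (0) and target (1) attention outputs.
src = np.zeros((2, 4, 8))
tgt = np.ones((2, 4, 8))
mask = np.full((2, 4, 8), 0.25)
mixed = cross_attention_mixup(src, tgt, mask)
print(mixed[0, 0, 0])  # 0.75
```

In practice the mask would be derived from the source cross-attention maps (the "smooth content mask" mentioned in Q1 above), so that background regions stay close to the source output.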
Summary: This paper proposes Conditional Score Guidance (CSG), where the goal is text-driven image-to-image translation that preserves the original context of a source image. They propose two novel components to achieve this: first, conditional score guidance, which computes the score as the combination of a text-conditional score and the posterior of the source latent given the target latent, where the posterior is estimated by Gaussian distribution modeling; second, cross-attention mixup, which enhances the quality of image translation. The proposed method demonstrates high-quality results compared to various baselines. Strengths: This paper proposes Conditional Score Guidance (CSG), where the goal is text-driven image-to-image translation that preserves the original context of a source image. They propose two novel components to achieve this: first, conditional score guidance, which computes the score as the combination of a text-conditional score and the posterior of the source latent given the target latent, where the posterior is estimated by Gaussian distribution modeling; second, cross-attention mixup, which enhances the quality of image translation. The proposed method demonstrates high-quality results compared to various baselines. Weaknesses: Incremental Novelty: Although the paper proposes conditional scores that replace the original scores of a pre-trained text-to-image diffusion model for image-to-image translation, the core methodology appears to be significantly built upon pix2pix-zero. To elevate the novelty, it's recommended that the authors shed more light on how their method extends beyond pix2pix-zero's capabilities. This can be achieved by emphasizing the unique aspects of the approach and providing a clearer delineation of the differences from pix2pix-zero. In addition, regarding performance, the qualitative results provided in the Appendix do not consistently demonstrate superior performance compared to pix2pix-zero. 
Thus, to substantiate the claimed superiority of the proposed method, it would be beneficial to include additional examples, including those not addressed by pix2pix-zero, to showcase the broader effectiveness of the paper. Limited Editing Capabilities: The proposed method seems to primarily focus on object-centric editing. The paper could benefit from demonstrating its capabilities for more precise or finer edits to articulate the efficacy of the proposed components, particularly their role in selectively editing the region of interest while retaining irrelevant parts of the image. Lack of Comparison: The paper should discuss the concurrent work Delta Denoising Score [1], which was introduced with the same purpose as CSG: to edit images with minimal modifications. Furthermore, a performance and methodological comparison with Plug-and-Play [2] is highly recommended to provide a more comprehensive understanding of image-to-image translation. Missing parts: The authors have not provided any discussion of the limitations and potential negative societal impacts of their proposed model, a crucial aspect that is currently missing. [1] Hertz et al., Delta Denoising Score [2] Tumanyan et al., Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Since the paper considers image editing, the authors should demonstrate cases where the proposed method fails, e.g., complex prompts that fail due to the incompetence of pre-trained text-to-image diffusion models, or attributes that are hard to edit. 
Also, an ethical warning should be included in the paper to address the possible misuse of image-editing methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank you for your constructive and positive comments, and below are our responses to the main questions. Q1. Incremental novelty Our algorithm is somewhat related to Pix2Pix-Zero in the sense that both CSG and Pix2Pix-Zero employ the cross-attention layers in the noise prediction network for text-driven image-to-image translation tasks. However, the two methods are completely different: CSG uses a mixup strategy in the cross-attention layers, while Pix2Pix-Zero takes a gradient step to reduce the distance between the cross-attention maps given by the reverse processes of $\mathrm{x}^{\text{src}}_t$ and $\mathrm{x}^{\text{tgt}}_t$, which requires an additional backpropagation step and leads to slower inference. Moreover, as mentioned by Reviewer fXLA, Reviewer nbbt, and Reviewer rP2j, we propose a principled technique for text-driven image translation based on the conditional score function with reasonable motivations. Such ideas have not been addressed before and are sound both theoretically and empirically, so we believe that our paper has sufficient novelty and contribution. Q2. Additional comparison with Pix2Pix-Zero Different from CSG, it is hard to apply Pix2Pix-Zero to dissimilar object-centric tasks such as horse-to-Eiffel-Tower, as presented in the right examples of Figure B in the rebuttal document (Rebuttal_CSG.pdf), since Pix2Pix-Zero enforces shape matching through the cross-attention layers. Also, Figures A-1, A-2, and B visualize additional qualitative examples using CSG and Pix2Pix-Zero, which demonstrate that CSG outperforms Pix2Pix-Zero. Q3. Limited editing capabilities We tried to follow the experimental protocol of prior works [Prompt-to-Prompt, Pix2Pix-Zero] by evaluating CSG on object-centric editing tasks. In addition to the object-centric editing tasks, our method can be extended to global editing tasks. 
To validate the property, we test CSG and Pix2Pix-Zero on global editing tasks, namely the street-to-snowy street and drawing-to-realistic photo tasks. As presented in Table A6, CSG outperforms Pix2Pix-Zero in terms of SD, LPIPS, and RD even with faster translation, although Pix2Pix-Zero achieves slightly higher values of CS. In addition to the quantitative results, Figures A-1 and A-2 in the rebuttal document demonstrate that the proposed method achieves better performance on the tasks. For the global editing tasks, note that we replace BG-LPIPS with LPIPS, where LPIPS measures the perceptual similarity using the entire source and target images, which is more suitable for the global editing tasks. We appreciate the good suggestion and will add the results in the final version.

Table A6: Quantitative results compared with Pix2Pix-Zero using the pre-trained Stable Diffusion and its synthetic images for the Street → Snowy Street and Drawing → Realistic photo tasks. The black bold-faced number represents the best performance for each task in each metric.

| Street → Snowy Street | CS (↑) | SD (↓) | LPIPS (↓) | RD (↓) |
|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| Pix2Pix-Zero | **0.3215** | 0.0186 | 0.2345 | 0.1436 |
| CSG | 0.3125 | **0.0166** | **0.2077** | **0.1340** |
| **Drawing → Realistic photo** | **CS (↑)** | **SD (↓)** | **LPIPS (↓)** | **RD (↓)** |
| Pix2Pix-Zero | **0.2997** | 0.0414 | 0.2783 | 0.3933 |
| CSG | 0.2966 | **0.0263** | **0.0722** | **0.2190** |

Q4. Comparison with Plug-and-Play and Delta Denoising Score (DDS)
We present the results of Plug-and-Play in Table A7, where the results of CSG are presented in Table 1 of the main paper. Table 1 and Table A7 imply that the proposed method outperforms Plug-and-Play in most cases. For the dog-to-dog with glasses task, we do not report the BG-LPIPS score, since the background region can be easily preserved for the task.
Moreover, we present the qualitative results given by Plug-and-Play in Figure D in the rebuttal document, which contains results comparable with Figure 4 of the main paper. The two figures imply that CSG achieves much better qualitative results. Furthermore, considering the qualitative results in Figures C and D, the proposed method even without the cross-attention mixup outperforms Plug-and-Play. In the case of DDS, it is very difficult to perform the suggested experiment during the rebuttal period since the source code of DDS is not publicly available. Note that DDS is still an arXiv-only preprint, released only one month before the deadline. We thank you for the comment and will add the results in the final version.

Table A7: Quantitative results of Plug-and-Play using the pre-trained Stable Diffusion and real images sampled from the LAION 5B for various tasks. The black bold-faced number represents better performance compared with the results of CSG presented in Table 1.

| | CS (↑) | SD (↓) | BG-LPIPS (↓) | RD (↓) |
|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| Cat → Dog | 0.2787 | **0.0107** | **0.1839** | 0.1197 |
| Dog → Cat | 0.2734 | **0.0102** | 0.2081 | 0.0995 |
| Wolf → Lion | 0.2776 | 0.0260 | 0.2647 | 0.1410 |
| Zebra → Horse | 0.2823 | 0.0314 | 0.3223 | 0.5739 |
| Dog → Dog w/ glasses | 0.2887 | **0.0067** | - | 0.0923 |

Q5. Limitations and potential negative societal impacts
As you mentioned, our method can fail to edit images with complex prompts due to the limitations of pre-trained text-to-image diffusion models. Like other text-driven image-to-image translation methods, the proposed method also has the limitation that it cannot be applied to complex tasks such as enlarging or moving an object; solving such difficult tasks would be interesting future work. Regarding the potential negative societal impact, our method can generate harmful or misleading contents due to the pre-trained model.
We will add the limitations and negative societal impacts in the final version. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Dear Reviewer ZGMV, As the end of the discussion period is approaching, we kindly ask whether our response has addressed your concerns. If you have any questions or additional comments, please do not hesitate to contact us. We thank you for your time and effort in reviewing our paper. Best wishes, Authors
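As a rough illustration of the cross-attention mixup mentioned in Q1 above, a mixup-style blend of two attention maps under a spatial content mask can be sketched as follows. The function name, shapes, and mask construction here are our own illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def attention_mixup(attn_src, attn_tgt, mask):
    # Blend a source and a target cross-attention map with a content mask.
    # mask values lie in [0, 1]; 1 keeps the source map (e.g. background regions),
    # 0 keeps the target map (e.g. the edited object region).
    return mask * attn_src + (1.0 - mask) * attn_tgt

# toy example with constant maps, so the blend is easy to verify
src = np.ones((4, 4))       # hypothetical source attention map
tgt = np.zeros((4, 4))      # hypothetical target attention map
mask = np.zeros((4, 4))
mask[:2] = 1.0              # pretend the top half is background
mixed = attention_mixup(src, tgt, mask)
```

Note this avoids any backpropagation step, which is the efficiency contrast with Pix2Pix-Zero's gradient-based attention matching drawn in the rebuttal.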
Summary: The authors propose two sampling techniques for diffusion models: Cross-Attention Mixup and conditional score guidance. Experiments show that the proposed methods achieve decent performance compared to baselines. Strengths: - Qualitative results are interesting - Reasonable motivations and methods Weaknesses: Weaknesses are written in the Questions part below. Technical Quality: 3 good Clarity: 3 good Questions for Authors:
* (Important) How can we get Eq. 8 from Eq. 7? I understand the second term ($p(x_t^{src}|…)$) in the integral ends up being one. My question is how the integral over the first term ($p(x_t^{tgt}|\hat{x}_t^{src}…)$) can be removed (or ignored) to get Eq. 8. The simplest way would be to treat it as a constant w.r.t. $x_t^{src}$. However, that is not the case, since the conditional (the first term in Eq. 7) would equal $p(x_t^{tgt},x_t^{src}|y^{tgt})/p(x_t^{src}|y^{tgt})$ by Bayes' rule, which is not constant w.r.t. $x_t^{src}$.
* (Line 35-36) A bridging paragraph is needed.
* (Typo, Algorithm 1) a target prompt embedding.
* It is not intuitive how the covariance $\Omega^{-1}$ is computed and what role it plays. Can it be visualized? E.g., use only the posterior guidance when sampling and directly visualize Eq. 14.
* It seems that $\Omega \in \mathbb{R}^{H \times W}$, while the diagonal covariance matrix needs to be in $\mathbb{R}^{C \times H \times W}$ (considering the diagonal term only).
* What about sampling speed with and without the additional guidance term?
* (Important) CSG (w/o mixup) evaluation: classifier-free guidance (the best weight needs to be searched, e.g., 3-7) vs. CSG (w/o mixup). Ablation: DDIM + mixup vs. DDIM, to show the effect of mixup.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations:
- Additional experiments are needed to validate each of the proposed methods separately.
- Additional comparison between the proposed guidance method (without mixup) and existing guidance methods is needed.
- Visualization of the covariance of the posterior would be helpful to understand the proposed method.
- Additional description of how to get Eq. 8 from Eq. 7.
- It is hard to see the evaluation metric as a novelty since it is not mentioned/analyzed specifically.
Although there are some limitations, I will increase my rating if my questions can be reasonably answered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank you for your constructive and positive comments, and below are our responses to the main questions.

Q1. Mathematical expression of Eq. 8 from Eq. 7
As described in line 164 of the main paper, we get Eq. (8) from Eq. (7) by drawing a sample $\hat{\mathrm{x}}^{\text{src}}_t$ from $p(\mathrm{x}^{\text{src}}_t|\mathrm{x}^{\text{src}}, \mathrm{y}^{\text{src}})$, where the technique is also employed in the controllable generation described in Section I of [C1]. Note that, since we employ the deterministic DDIM process, drawing multiple samples does not change anything.

Q2. Classifier-free guidance (CFG) vs. CSG w/o mixup
We test CFG using five different values of the guidance scale $s$, {3, 4, 5, 6, 7}, to compare it with CSG w/o mixup. Due to the space constraints, we only report the best result of CFG for each task in Table A4, where the results of CSG w/o mixup and DDIM or CFG (s=5) are presented in Table 2. The tables imply that CSG w/o mixup always outperforms CFG with the best hyperparameter $s$ ($s$ = 3 for all tasks). In the case of the dog-to-dog with glasses task, we do not report the BG-LPIPS score, since the background region can be easily preserved for the task. We appreciate the good suggestion and will add all experimental results for $s$ from 3 to 7 in the final version.

Table A4: Quantitative results of CFG with the best classifier guidance scale $s$ on the LAION 5B dataset for various tasks using the pre-trained Stable Diffusion model.

| | CS (↑) | SD (↓) | BG-LPIPS (↓) | RD (↓) |
|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| Cat → Dog | 0.2938 | 0.0582 | 0.3442 | 0.2480 |
| Dog → Cat | 0.2894 | 0.0611 | 0.3373 | 0.2991 |
| Wolf → Lion | 0.2991 | 0.0611 | 0.3728 | 0.5495 |
| Zebra → Horse | 0.2930 | 0.0788 | 0.3923 | 0.8353 |
| Dog → Dog w/ glasses | 0.3124 | 0.0497 | - | 0.2453 |

Q3. DDIM w/ mixup vs.
DDIM
We report the results of DDIM combined with our mixup strategy, denoted by DDIM w/ mixup, in Table A5 to show the effectiveness of cross-attention mixup, where the results of DDIM are presented in Table 2. The tables demonstrate that DDIM w/ mixup always outperforms DDIM except in one case in the zebra-to-horse task. Moreover, Table 2 indicates that our mixup is also effective when combined with the proposed conditional score guidance.

Table A5: Quantitative results of DDIM combined with the proposed mixup strategy on the LAION 5B dataset for various tasks using the pre-trained Stable Diffusion model. The black bold-faced number represents better performance compared with the results of DDIM presented in Table 2.

| | CS (↑) | SD (↓) | BG-LPIPS (↓) | RD (↓) |
|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| Cat → Dog | **0.2923** | **0.0697** | **0.3746** | **0.4310** |
| Dog → Cat | **0.2914** | **0.0717** | **0.3538** | **0.4261** |
| Wolf → Lion | **0.2995** | **0.0664** | **0.3875** | **0.7821** |
| Zebra → Horse | 0.2986 | **0.0885** | **0.4074** | **0.8563** |
| Dog → Dog w/ glasses | **0.3196** | **0.0572** | - | **0.3208** |

Q4. Visualization of the covariance
Figure E in the rebuttal document (Rebuttal_CSG.pdf) visualizes the precision matrix in Eq. (14). Note that the two values of $\mathrm{x}^{\text{src}}_t$ and $\mathrm{x}^{\text{tgt}}_t$ at the object region become more different as the reverse timestep $t$ gets closer to 0. This implies that our estimate $\mathrm{x}^{\text{tgt}}_t$ of the true mean of $p(\mathrm{x}^{\text{src}}_t|\mathrm{x}^{\text{tgt}}_t, \mathrm{y}^{\text{tgt}})$ can be imprecise at the object region when the timestep is close to 0. However, we can ignore this error since the corresponding precision values are set to 0, as visualized in the figure. Also, considering Eq.
(16), the role of the precision (the inverse of the covariance) is to adaptively encourage $\mathrm{x}^{\text{tgt}}_t$ to stay close to $\mathrm{x}^{\text{src}}_t$ depending on the precision values in each region, which is a well-suited formulation for image-to-image translation tasks. We appreciate the good suggestion and will add the visualization in the final version.

Q5. Comparison between CSG w/o mixup and existing guidance methods
To compare CSG w/o mixup with existing guidance methods, please refer to the results of Pix2Pix-Zero and Prompt-to-Prompt presented in Table 1 and the results of CSG w/o mixup in Table 2. As presented in the tables, CSG w/o mixup outperforms the two guidance methods in most cases. Moreover, Figure C in the rebuttal document (Rebuttal_CSG.pdf), which contains results comparable with Figure 4 of the main paper, demonstrates that CSG w/o mixup achieves better qualitative results.

Q6. Presentation
We will carefully revise the manuscript to reflect your comments by adding a bridging paragraph between line 35 and line 36 and correcting the typo in Algorithm 1.

Q7. Dimension of the covariance
We replicate the same value of Ω C times along the channel dimension to match the dimension.

Q8. Analysis and detailed description of relational distance
A detailed description of RD is already presented in Section A.3 of the supplementary material. Different from RD, the other metrics compare at an individual instance level. Instance-level comparisons may be insufficient to evaluate existing algorithms, and it is important to consider the entire structures given by the two sets by measuring the relational set information as RD does. Therefore, we believe that RD can provide a more comprehensive assessment of the performance.

Q9. Additional sampling cost of the conditional guidance
Due to the space constraints, please refer to our response to Q2 of Reviewer JnRy.

Reference [C1] Y.
Song et al., Score-Based Generative Modeling through Stochastic Differential Equations, ICLR 2021. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Dear Reviewer nbbt, As the end of the discussion period is approaching, we kindly ask whether our response has addressed your concerns. If you have any questions or additional comments, please do not hesitate to contact us. We thank you for your time and effort in reviewing our paper. Best wishes, Authors
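The channel replication described in Q7 above (copying the same per-pixel precision map C times to match a C×H×W latent) can be sketched with a simple broadcast. The variable names and shapes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

C, H, W = 4, 8, 8
omega = np.random.rand(H, W)  # hypothetical per-pixel precision values, shape (H, W)

# Replicate the same (H, W) map C times along the channel dimension so the
# precision tensor matches a (C, H, W) latent. broadcast_to returns a read-only
# view, so we copy it to obtain a writable tensor.
omega_chw = np.broadcast_to(omega, (C, H, W)).copy()
```

Every channel then carries an identical copy of the spatial precision map, which is exactly the "replicate the same value of Ω C times" answer in the rebuttal.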
Summary: This paper proposes a new score function for text-driven image-to-image translation. The core idea is to estimate a score function conditioned on both the original image and the target prompt. The score function can be decomposed into two parts: one comes from the target prompt, and the other is a guiding term for target image generation. In addition, they use a trick to obtain better masks for preserving regions that should not be edited, such as the background. Strengths: The idea is sound both theoretically and empirically. Their writing is clear. Weaknesses: 1. Why do they choose those five tasks to evaluate? I know they are from prior work, but I am a bit concerned that they cannot demonstrate the generalizability of the method. 2. I hope they could show random samples so that we can see the average performance of their method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank you for your constructive and positive comments, and below are our responses to the main questions.

Q1. Concern about the generalizability of CSG
In the main paper, we focused on testing CSG on local editing tasks such as cat-to-dog, dog-to-cat, wolf-to-lion, zebra-to-horse, and dog-to-dog with glasses. In addition to the local editing tasks, we employ global editing tasks, the street-to-snowy street and drawing-to-realistic photo tasks, to compare CSG with Pix2Pix-Zero. As presented in Table A2 and Table A3, the proposed method outperforms Pix2Pix-Zero in terms of SD, LPIPS, and RD even with faster translation, although Pix2Pix-Zero achieves slightly higher values of CS. In addition, we visualize generated samples in Figures A-1 and A-2 in the rebuttal document (Rebuttal_CSG.pdf), which demonstrate that the proposed method also achieves better performance on the tasks. For the global editing tasks, note that we replace BG-LPIPS with LPIPS, where LPIPS [B1] measures the perceptual similarity using the entire source and target images, which is more suitable for the global editing tasks. We appreciate the good suggestion and will add the results in the final version.

Table A2: Quantitative results compared with Pix2Pix-Zero using the pre-trained Stable Diffusion and its synthetic images for the Street $\rightarrow$ Snowy Street task. The black bold-faced number represents the best performance in each column.

| | CS ($\uparrow$) | SD ($\downarrow$) | LPIPS ($\downarrow$) | RD ($\downarrow$) |
|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| Pix2Pix-Zero | **0.3215** | 0.0186 | 0.2345 | 0.1436 |
| CSG | 0.3125 | **0.0166** | **0.2077** | **0.1340** |

Table A3: Quantitative results compared with Pix2Pix-Zero using the pre-trained Stable Diffusion and its synthetic images for the Drawing $\rightarrow$ Realistic photo task.
| | CS ($\uparrow$) | SD ($\downarrow$) | LPIPS ($\downarrow$) | RD ($\downarrow$) |
|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| Pix2Pix-Zero | **0.2997** | 0.0414 | 0.2783 | 0.3933 |
| CSG | 0.2966 | **0.0263** | **0.0722** | **0.2190** |

Q2. Qualitative samples
We present additional samples in the rebuttal document (Rebuttal_CSG.pdf); please refer to the figures.

Reference [B1] R. Zhang et al., The unreasonable effectiveness of deep features as a perceptual metric, CVPR 2018. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Dear Reviewer fXLA, As the end of the discussion period is approaching, we kindly ask whether our response has addressed your concerns. If you have any questions or additional comments, please do not hesitate to contact us. We thank you for your time and effort in reviewing our paper. Best wishes, Authors
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their time and efforts in reviewing our main paper. We have attached our visualization results in the document "Rebuttal_CSG.pdf", and please refer to the qualitative results. Pdf: /pdf/afcc7b2c38b12d7ddf8be662d0f08ac4ad041395.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: In this paper, the authors propose a new method that can perform image-to-image translation through a pretrained text-to-image diffusion model. They propose a new cross-attention map mixing technique and a new conditional score guidance function to tackle this problem. The method introduced in this paper does not require additional training. The authors have also shown strong qualitative results demonstrating various application scenarios. Strengths: The proposed method does not require model architecture modifications or any training of the pretrained model. The qualitative results also look very promising. The paper is very coherently written. Weaknesses:
1. Equation 3 is the same as in the paper “Xuan Su, Jiaming Song, Chenlin Meng, Stefano Ermon. Dual Diffusion Implicit Bridges for Image-to-Image Translation. ICLR 2023”, which the authors did not cite or compare.
2. There are a lot of approximations and conjectures in the theory which are not explained anywhere in the main paper or the appendix.
3. The presentation of the tables is a little bit misleading. It almost seems like the authors chose to denote both the best and the second-best results in each category just so that their CS scores won’t look too bad.
4. The authors invented a new metric to evaluate the performance. However, there is no detail of this new metric in the main paper, and readers have to refer to the appendix to understand what this metric is exactly.
5. There is no discussion of the limitations of the method, nor of any potential negative societal impact.
6. The ablation study is very limited, and it doesn’t include experiments to study the effects of the newly proposed conditional score guidance.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Related to Weakness (1), can the authors compare their method to “Dual Diffusion Implicit Bridges for Image-to-Image Translation”? How long does it take to generate one image?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There is no discussion of the limitations of the method, nor of any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank you for your constructive comments, and below are our responses to the main questions.

Q1. Citation and comparison with DDIB [A1]
We omitted the reference for Eq. (3) accidentally since it is a basic equation introduced in the DDIM paper, which was also used in [A1]. We are sorry for the missing citation, but we did not claim that the derivation or usage of the equation is our contribution. Note that the naive method in Section 3.1 of the main paper refers to [A1], and we used DDIB as our baseline. Also, we have already compared our method with DDIB, which is actually referred to as DDIM in both the main paper and supplementary material. We thank you for the comment and will clarify the reference issue in the final version.

Q2. Inference time compared with DDIB or DDIM
In order to observe the realistic speed of each algorithm, we measure the wall-clock time on an NVIDIA A100 GPU with a single image. Although the naive DDIM translation algorithm, i.e., DDIB, achieves the fastest inference time as presented in Table A1, that framework achieves poor generation results, as mentioned in the main paper and supplementary material. For the theoretical inference comparison, note that CSG requires approximately an extra 0.5x inference cost over DDIB, since CSG needs the extra computation of reversing the latent using the source prompt embedding, unlike DDIB. However, in the case of CSG and Pix2Pix-Zero, there is a disparity between theoretical and practical inference costs, since the communication cost of copying cross-attention maps from GPU memory to CPU memory is not negligible. In CSG, in order to save GPU memory, our algorithm applies the resizing operation to the cross-attention maps on the CPU when computing the smooth content mask, which further slows down inference. Therefore, the practical inference time can be reduced if enough GPU memory is available.
Table A1: Computational cost of the proposed method compared to DDIB and Pix2Pix-Zero.

| | DDIB | Pix2Pix-Zero | CSG w/o mixup | CSG |
|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| time/image (s) | 5.129 | 28.647 | 19.791 | 25.736 |

Q3. A lot of approximations and conjectures in the theory
We used two approximations for CSG, one of which is the score function approximation mentioned in Eq. (12) of the main paper. Note that all previous methods relying on text-to-image diffusion models, including the simple translation algorithm, employ the same approximation. Also, we employ only one additional approximation, in Eq. (8) of the main paper, using the deterministic DDIM sample $\hat{\mathrm{x}}^{\text{src}}_t$ drawn from $p(\mathrm{x}^{\text{src}}_t|\mathrm{x}^{\text{src}}, \mathrm{y}^{\text{src}})$, which is also described in line 164 of the main paper. Thanks to the additional approximation for the proposed guidance, CSG outperforms the previous methods.

Q4. Misleading presentation of tables
We simply highlight the best and second-best performance in each metric, and we kindly ask whether this addresses your concern.

Q5. Details of relational distance
Although RD is described in detail in the supplementary material, the main paper did not point readers to that description. We will carefully revise the manuscript to reflect your comment.

Q6. Discussion of limitations and potential negative societal impact
Our method can fail to edit images with complex prompts due to the limitations of pre-trained text-to-image diffusion models. Like other text-driven image-to-image translation methods, the proposed method also has the limitation that it cannot be applied to complex tasks such as enlarging or moving an object; solving such difficult tasks would be interesting future work.
Regarding the potential negative societal impact, our method can generate harmful or misleading contents due to the pre-trained model. We will add the limitations and negative societal impacts in the final version.

Q7. Analysis of the newly proposed conditional score guidance
We already provided the ablation study results in Table 2 of the main paper. The table implies that the conditional score guidance without the proposed mixup, denoted by CSG w/o Mixup, is helpful for preserving the structures of source images compared with the naive translation algorithm. Also, we present the qualitative results in Figure C in the rebuttal document (Rebuttal_CSG.pdf), which contains results comparable with Figure 4 of the main paper. Considering the two figures, our conditional score guidance significantly outperforms DDIM.

Reference [A1] X. Su et al., Dual Diffusion Implicit Bridges for Image-to-Image Translation, ICLR 2023. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Dear Reviewer JnRy, As the end of the discussion period is approaching, we kindly ask whether our response has addressed your concerns. If you have any questions or additional comments, please do not hesitate to contact us. We thank you for your time and effort in reviewing our paper. Best wishes, Authors --- Rebuttal Comment 1.2: Title: Thank you for your response Comment: Thank you for your clarification. My major concerns have been addressed in the rebuttal. I would like to change my rating from 3 to 5. --- Reply to Comment 1.2.1: Title: Thanks for the comment Comment: We appreciate your feedback and will revise the main paper to reflect your comments. If you have any questions or additional comments, please do not hesitate to contact us. Best wishes, Authors
Strategic Apple Tasting
Accept (poster)
Summary: The paper studies an online learning problem with incentives. In particular, in the model studied in the paper there is a principal who has to take decisions on a sequence of $T$ (different) agents. Each agent has a context, and they can strategically disclose it (truthfully or untruthfully) to the principal in order to induce the principal to take favorable decisions. The key feature of the model studied in the paper is that the feedback is one-sided, meaning that the principal only observes feedback when they take a positive decision on the agent. The paper first proposes an algorithm that achieves $\tilde O(\sqrt{T})$ strategic regret (a stronger notion than Stackelberg regret) when the contexts are selected stochastically. Then, the paper shows how to deal with adversarially-chosen contexts by providing an algorithm with $\tilde O(T^{(d+1)/(d+2)})$ strategic regret. Strengths: ORIGINALITY The paper studies, for the first time to the best of my knowledge, an online learning problem in which both incentives and one-sided feedback are involved. This is a relevant combination present in many application domains. QUALITY The paper does a good job of addressing several aspects of the online learning problem under study. In particular, the paper studies both the case in which contexts are selected stochastically and the one in which they are chosen adversarially. CLARITY The paper is well written and easy to follow. SIGNIFICANCE I believe that the problem studied in the paper is of interest to several people spanning different communities, from online learning to algorithmic game theory. Weaknesses: ORIGINALITY The techniques used in the paper are not groundbreaking, but they are rather adaptations of well-known and widely-used techniques in online learning.
QUALITY While the literature reviewed in the paper is quite extensive as far as work on online learning with incentives is concerned, I think the paper is missing some very related works in the literature at the interface between algorithmic game theory and online learning. In particular, the following lines of research seem strongly related to the paper:
- Repeated Stackelberg games: Balcan, Maria-Florina, et al. "Commitment without regrets: Online learning in Stackelberg security games." Proceedings of the sixteenth ACM conference on economics and computation. 2015. (see also some subsequent works extending the paper)
- Online Bayesian persuasion: Castiglioni, Matteo, et al. "Online Bayesian persuasion." Advances in Neural Information Processing Systems 33 (2020): 16188-16198. + Castiglioni, Matteo, et al. "Multi-receiver online Bayesian persuasion." International Conference on Machine Learning. PMLR, 2021. (these works extend those on repeated Stackelberg games to the Bayesian persuasion framework)
- Online learning in principal-agent problems: Zhu, Banghua, et al. "The Sample Complexity of Online Contract Design." arXiv preprint arXiv:2211.05732 (2022). (a recent preprint that indeed seems very related to the present paper)
I would have expected a discussion of these works in the paper. Of course, the papers that I cited above are only some representatives of such a line of research, and I am happy to provide more references if the authors need them. CLARITY No weaknesses to report. SIGNIFICANCE No weaknesses to report. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) I believe that the model studied in the paper is closely connected with one introduced in a very recent preprint (Bernasconi, Martino, et al. "Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion." arXiv preprint arXiv:2303.01296 (2023).). In that paper, the authors study a model with type reporting that looks very similar to the one in the present paper.
Am I right? 2) The regret bound in Theorem 4.1 is very similar to a regret bound in (Zhu, Banghua, et al. "The Sample Complexity of Online Contract Design." arXiv preprint arXiv:2211.05732 (2022).). Can you discuss the differences between the approach in that paper and your approach? 3) Algorithm 3 requires exponential running time in the worst case. Are there any computational hardness results that can be applied to your problem? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Please find our answers to your comments/questions below. [*While the literature revised in the paper is quite extensive as far as work on online learning with incentives are concerned, I think the paper is missing some very related works in the literature at the interface between algorithmic game theory and online learning. In particular, the following lines of research seem strongly related to the paper…*] While we agree that the settings you mention are related in the sense that all of these settings (including ours) are various instantiations of Stackelberg games, we did not originally include them as we believe that they are not as strongly related to our work as the literature we already cited. However, we are happy to include them in an expanded related work section and we highlight below the major differences and similarities with our paper. Repeated Stackelberg games: In this literature, the principal (leader) commits to a mixed strategy over a finite set of actions, and the agent (follower) best-responds by playing an action from a finite set of best-responses. Note that unlike in our setting, both the principal’s and agent’s payoffs can be represented by matrices. In contrast, in our setting the principal commits to a pure strategy from a continuous set of actions, and the agent best-responds by playing an action from a continuous set. Online Bayesian persuasion: In this literature, the principal (sender) commits to a “signaling policy” (a random mapping from “states of the world” to receiver actions) and the agent (receiver) performs a posterior update on the state based on the principal’s signal, then takes an action from a (usually finite) set. In both this setting and ours, the principal’s action is a policy. 
However in our setting the policy is a linear decision rule, whereas in the Bayesian persuasion setting, the policy is a set of conditional probabilities which form an “incentive compatible” signaling policy. This difference in the policy space for the principal typically leads to different algorithmic ideas being used in the two settings. Online learning in principal-agent problems: Strategic learning problems are indeed an instance of online learning in principal-agent problems. Since you mention online contract design as a particular example, we will highlight how our work differs from this area. In contract design, the principal commits to a contract (a mapping from “outcomes” to agent payoffs). The agent then takes an action, which affects the outcome. In particular, they take the action which maximizes their expected payoff, subject to some cost of taking the action. The goal of the principal is to design a contract such that their own expected payoff is maximized. While the settings are indeed similar, there are several key differences. First, in online contract design the principal always observes the outcome, whereas in our setting the principal only observes the reward if a positive decision is made. Second, the form of the agent’s best response is different, which leads to different agent behavior and, as a result, different online algorithms for the principal (see below for more details). [*I believe that the model studied in the paper is closely connected with one introduced in a very recent preprint (Bernasconi, Martino, et al. "Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion." arXiv preprint arXiv:2303.01296 (2023).). In that paper, the authors study a model with type reporting that looks very similar to the one in the present paper. 
Am I right?*] In type reporting, the sender (principal) asks each receiver (agent) to select a signaling scheme, and the signaling schemes are designed in a way such that each receiver is incentivized to select the signaling scheme corresponding to their (true) type. In our setting, the agent’s “type” may be thought of as their context. However unlike the online Bayesian persuasion setting, in our setting the agent has an incentive to strategically misreport their type. Another (less important) difference between our setting and this one is that in our setting the agent’s type is from a continuous space, but the type space is discrete in the setting of Bernasconi et al. We will add these connections in a revision. [*The regret bound in Theorem 4.1 is very similar to a regret bound in (Zhu, Banghua, et al. "The Sample Complexity of Online Contract Design." arXiv preprint arXiv:2211.05732 (2022).). Can you discuss the differences between the approach in that paper and your approach?*] The two algorithms are similar, as both are running multi-armed bandit algorithms (in our case EXP3, in their case UCB) over a discretization of a continuous action space. As a result, our novelty in this section is the analysis of the algorithm in the strategic setting we consider. (Note that the analysis of our algorithm differs significantly from theirs; ours relies on bounding the “strategic discretization error”, whereas theirs uses a covering argument to bound discretization error.) [*Algorithm 3 requires exponential running time in the worst case. Are there any computational hardness results that can be applied to your problem?*] We are not aware of any such hardness results. As we mention in the conclusion, an exciting direction for future research is to design polynomial-time algorithms (or prove hardness results) for the adversarially-chosen context setting. 
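The two-step structure mentioned in the answer above (discretize the continuous action space into K arms, then run a multi-armed bandit algorithm such as EXP3 over the grid) can be sketched generically as follows. This is an illustrative textbook EXP3, not the paper's Algorithm 3; the reward function, grid size K, and exploration rate gamma are placeholders, and the strategic-response machinery is omitted entirely.

```python
import math
import random

def exp3_discretized(reward_fn, T, K, gamma):
    """Generic EXP3 over the uniform grid {0, 1/(K-1), ..., 1} of [0, 1].

    reward_fn(arm_value, t) must return a reward in [0, 1].
    Returns the total reward collected over T rounds.
    """
    arms = [k / (K - 1) for k in range(K)]
    weights = [1.0] * K
    total = 0.0
    for t in range(T):
        wsum = sum(weights)
        # mix the exponential weights with uniform exploration
        probs = [(1 - gamma) * w / wsum + gamma / K for w in weights]
        i = random.choices(range(K), weights=probs)[0]
        r = reward_fn(arms[i], t)
        total += r
        # importance-weighted reward estimate keeps the update unbiased
        weights[i] *= math.exp(gamma * (r / probs[i]) / K)
        # renormalize to avoid floating-point overflow over long horizons
        m = max(weights)
        weights = [w / m for w in weights]
    return total
```

With a uniform grid, the gap between the best arm and the best continuous action is O(1/K); the novelty claimed in the rebuttal is the analysis bounding the analogous "strategic discretization error" when agents best-respond to the discretized policies.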
--- Rebuttal Comment 1.1: Comment: I would like to thank the Authors for their response, they satisfactorily answered all of my questions. As a result, I am sticking to my (positive) score.
Summary: The paper studies strategic apple tasting settings. This setting involves decision making that assigns decisions to agents who have incentives to strategically modify their input (context), while the decision maker only receives apple tasting (one-sided) feedback, i.e., feedback only for positively assigned decisions. The authors formulate this setting as an online learning problem with apple-tasting feedback, where a principal makes decisions about a sequence of T agents, with the goal of achieving sublinear strategic regret. Under this problem formulation, the authors propose a learning algorithm (Algorithm 1) based on a greedy, shifted linear policy and parameter updates based on clean contexts. The authors show that the algorithm achieves $O(\sqrt{T})$ strategic regret when the agent contexts are generated stochastically. Then, the authors extend to the situation where T is small or unknown and propose another algorithm (Algorithm 2) for this scenario. Finally, the authors relax the stochasticity assumption and provide an algorithm (Algorithm 3) for adversarially generated agents. These results apply to the more general setting of bandit feedback, under slight modifications to the algorithms. Strengths: - I like that the paper is very well written and easy to follow. - The studied one-sided feedback settings with strategic agents are interesting and often occur in real-world situations. - The problem is clearly formulated. For all proposed algorithms, the authors provide performance guarantees which are clearly explained and supported by proofs. Weaknesses: It is slightly hard for me to make a connection between the assumption of the agent's strategic modification to their context (with effort budget $\delta$ mentioned in Definition 2.1) and the motivating examples such as hiring and lending.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - As in the above section, can the authors shed some light on the strategic modification effort budget? - Line 212: The authors mentioned "This is the type of strategizing we want to incentivize." I didn't understand why the decision maker would like to incentivize this behavior? - Algorithm 1 line 4, "... and data S_t", do you mean "data D_t"? - Can the authors move algorithm 4 (which is part of algorithm 2) into the main text? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I see that the authors listed a few related future directions in the conclusion. No negative societal impact observed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Please find our responses to your questions and comments below. [*As in the above section, can the authors shed some light on the strategic modification effort budget?*] Consider a lending setting in which an applicant (strategic agent) wishes to obtain a loan from a bank (principal). Each agent is described by some context (e.g., number of lines of existing credit, credit history, current income, etc.), which may be strategically modified by the agent. Using the (possibly modified) context, the goal of the bank is to decide whether or not to give the applicant a loan. If the loan is given to the applicant, the bank observes some reward (e.g. whether or not the loan was paid back on time, amount of interest accrued, etc.). Otherwise, if they reject the applicant, they receive no signal about their decision. Specifically regarding the effort budget, it may be viewed as a hard constraint on the amount an individual is able to strategically modify their original context. In settings such as lending, this budget is analogous to “hard”/“strict” time or monetary constraints that agents may have when modifying their context. For a more concrete example, a loan applicant may only have several hours every day during which they may prepare for their loan application (due to other responsibilities). In this case, the amount they can strategize is limited by this time constraint. [*Line 212: The authors mentioned "This is the type of strategizing we want to incentivize."
I didn't understand why the decision maker would like to incentivize this behavior?*] By “this is the type of strategizing we want to incentivize”, we just meant that we want these particular agents to strategize so that we can assign the positive action to them (since this is what maximizes the principal’s utility), as we shifted the decision boundary to account for the strategic behavior of the agents who should not receive the positive action. We will add a clarifying sentence in our revision. [*Algorithm 1 line 4, "... and data S_t", do you mean "data D_t"?*] Yes, thank you for pointing this out. [*Can the authors move algorithm 4 (which is part of algorithm 2) into the main text?*] Yes – we will do so in the revision.
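The boundary shift described in the answer above ("we shifted the decision boundary to account for the strategic behavior of the agents who should not receive the positive action") is a purely geometric idea that can be sketched in a few lines, assuming the $\ell_2$ effort budget of Definition 2.1. The names theta, tau, delta and the best_response helper are hypothetical, and this omits the estimation part of Algorithm 1; it only illustrates why the shift separates gaming from deserving agents.

```python
import numpy as np

def shifted_linear_policy(theta, tau, delta):
    """Accept (action 1) iff <theta, x_reported> >= tau + delta * ||theta||_2.

    If agents can move their context by at most delta in l2 norm, then by
    Cauchy-Schwarz they can raise <theta, x> by at most delta * ||theta||_2.
    Shifting the threshold by exactly that amount means no agent whose true
    score <theta, x> is below tau can game their way to a positive decision,
    while agents with true score above tau can still strategize across.
    """
    shift = delta * np.linalg.norm(theta)
    return lambda x_reported: int(theta @ x_reported >= tau + shift)

def best_response(theta, tau, delta, x_true):
    """Hypothetical agent: move up to delta along theta if that flips the
    decision to 1; otherwise report the context truthfully."""
    policy = shifted_linear_policy(theta, tau, delta)
    x_gamed = x_true + delta * theta / np.linalg.norm(theta)
    return x_gamed if policy(x_gamed) == 1 and policy(x_true) == 0 else x_true
```

With theta = (1, 0), tau = 0.5, delta = 0.1, an agent with true score 0.55 (above tau) strategizes across the shifted boundary and is accepted, while an agent with true score 0.45 (below tau) cannot reach it and is rejected.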
Summary: The paper considers a contextual bandit problem in which users can strategically modify their context for their own benefit. It provides sublinear regret algorithms by exploiting the budgeted strategic structure of the agents, against stochastically chosen agents. It further obtains sublinear regret algorithms against adversarially chosen agents by considering a variant of EXP3. Strengths: The paper studies an interesting problem setting and makes a reasonable contribution. Weaknesses: Although the analysis seems technically thorough, the intuition behind the solution is pretty straightforward: incentivize truthful actions for a certain number of rounds, estimate $\theta$, and strictly separate strategically modified contexts by conservatively setting the decision boundary based on the effort budget. Also, the algorithm seems to heavily depend on knowledge of the agents' budget. Detailed comments are in Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What happens if the principal can also optimize over the choice of decision policy, i.e., not just using thresholded rule? - In L254, even though a small fraction of agents may strategically modify their contexts, it seems fairly intuitive that the estimates would be inconsistent, so wonder why the authors think this to be a surprising fact - is it meant to be "far" from being consistent, not just inconsistent? - L82 consists a lot of citations. It looks bad to me, at least parsing them/giving more contexts to each reference/removing part of them would be helpful. It's a bit distracting in its current form and gave me no information. - It seems that the algorithm essentially needs to know the budget $\delta$ in advance. What happens if the algorithm has misspecified budget? Is the algorithm fairly robust? - In L170, bandit feedback setting is said to be more challenging - why is it more challenging than the one-sided feedback?
Also, the authors argue that all their algorithms can be made to be applicable to the bandit setting - yes it's good to know that, but I wonder if there's more general results, e.g., any reduction from one setting to the other, or if the authors have thought about any black-box reduction from one algorithm to another. - I also wonder why the standard adversarial contextual bandit algorithm does not work, instead of Algorithm 1, possibly with some slight modifications. - I'd like to ask on how the authors think about the case if the agents observe some noisy context about themselves. Minor comments - I found the presentation of Stackelberg regret and Proposition 2.4 a bit redundant, as it does not appear in the main paper but there. - In Algorithm 1, what is data $S_t$ - it seems it's not updated at all? - L290, typo: withing - L298, duplicated threshold Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Detailed comments are in Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Please find our responses to your comments and questions below. [*What happens if the principal can also optimize over the choice of decision policy, i.e., not just using thresholded rule?*] This is an interesting question. While we are able to obtain no-strategic-regret by only considering linear decision policies, it may be possible in theory to obtain better rates if non-linear decision policies are deployed. However this appears challenging, as we point out in Section 3.1: “While our results do not rule out better strategic regret rates in d for more complicated algorithms (e.g., those which deploy non-linear policies), it is often unclear how strategic agents would behave in such settings, both in theory (Definition 2.1 would require agents to solve a non-convex optimization with potentially no closed-form solution) and in practice, making the analysis of such nonlinear policies difficult in strategic settings.” [*In L254, even though a small fraction of agents may strategically modify their contexts, it seems fairly intuitive that the estimates would be inconsistent, so wonder why the authors think this to be a surprising fact - is it meant to be "far" from being consistent, not just inconsistent?*] On line 253 we say that this phenomenon is unsurprising (not “surprising” as the reviewer mentions) - we include it because it motivates the study of “strategy-aware” algorithms. [*L82 consists a lot of citations. It looks bad to me, at least parsing them/giving more contexts to each reference/removing part of them would be helpful. It's a bit distracting in its current form and gave me no information.*] We chose to include the citations to show that there is a lot of work in this area which does not consider one-sided feedback. However we see your point, and will give more context to these references in an appendix.
[*It seems that the algorithm essentially needs to know the budget $\delta$ in advance. What happens if the algorithm has misspecified budget? Is the algorithm fairly robust?*] Algorithm 1 is fairly robust to overestimates of the budget $\delta$, in the sense that (1) it will still produce a consistent estimate for $\theta^{(1)}$ (albeit at a rate which depends on the over-estimate instead of the actual value of $\delta$) and (2) it will incur a constant penalty in regret which is proportional to the amount of over-estimation. Algorithm 4 will also incur a constant penalty in regret which is proportional to the amount of misspecification (either over- or under-estimation). We will add a short discussion on this in the revision after the presentation of our bounds. [*In L170, bandit feedback setting is said to be more challenging - why is it more challenging than the one-sided feedback? Also, the authors argue that all their algorithms can be made to be applicable to the bandit setting - yes it's good to know that, but I wonder if there's more general results, e.g., any reduction from one setting to the other, or if the authors have thought about any black-box reduction from one algorithm to another.*] Bandit feedback is (slightly) more challenging in our setting, since an additional parameter ($\theta^{(0)}$) needs to be estimated. The algorithms for both feedback settings are more-or-less the same: the only difference is to estimate and use this additional parameter in the bandit feedback setting. We chose to present our results in terms of apple tasting feedback since that is the type of feedback present in our motivating examples. [*I also wonder why the standard adversarial contextual bandit algorithm does not work, instead of Algorithm 1, possibly with some slight modifications.*] We are not sure which algorithm you are referring to when you say “standard adversarial contextual bandit algorithm”. 
If you mean EXP4, this requires a set of (strategy-aware) experts for which we know the action they would recommend at each round. But note that the action per expert depends on the agent’s context, which is strategically changed as a response to the chosen expert. In other words, unless all agents are truthful with their contexts, we cannot infer the actions that all the experts would recommend per round (which is required for the EXP4 update). If you are referring to the more recent line of work on “adversarial contextual learning” (e.g., “Efficient Algorithms for Adversarial Contextual Learning” by Syrgkanis et al., “Improved Regret Bounds for Oracle-Based Adversarial Contextual Bandits” by Syrgkanis et al., “BISTRO” by Rakhlin & Sridharan), these methods require advance knowledge of the distribution over contexts to give as input to their algorithms (an assumption we do not require). Moreover in the strategic setting, the distribution over contexts changes as a function of the algorithm deployed by a learner, since different algorithms will cause agents to strategize in different ways. Finally note that Algorithm 1 enjoys a better dependence on $T$ in the regret bound when compared to algorithms in this line of work ($T^{1/2}$ versus $T^{2/3}$ and $T^{3/4}$). [*I'd like to ask on how the authors think about the case if the agents observe some noisy context about themselves.*] If agents observe a noisy version of their context (instead of their true context), then a best-response similar to our “trembling hand tiebreaking” assumption (see Line 133 in the main body and Appendix D for more details) may be reasonable since in this case, an individual who strategizes may want to “play it safe” and “overshoot” the decision boundary by a bit to account for the fact that they do not have perfect knowledge of their context. Finally, thank you for pointing out the typos and suggestions. In particular, data $S_t$ is a typo - this should be data $D_t$.
We will correct this and all others mentioned. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I have one more quick question that just comes to my mind. Can you also discuss some connections to perturbation technique used in Learning in Stackelberg Games with Non-myopic Agents, EC'22 (Section 4.3), which refers back to Stochastic multiarmed bandits with unrestricted delay distributions, ICML'21? I know that the setup is a bit different (contextual, one-sided, etc) but the objective of techniques seems related (to view the strategically modified feedback as a perturbed output of bandit problem). --- Reply to Comment 1.1.1: Comment: *Can you also discuss some connections to perturbation technique used in Learning in Stackelberg Games with Non-myopic Agents, EC'22 (Section 4.3), which refers back to Stochastic multiarmed bandits with unrestricted delay distributions, ICML'21?* In Section 4.3 of [HLNW22], the authors consider a stochastic bandit setting in which a sequence of *rewards* is perturbed within some ball of radius $\delta$, possibly in an *adversarial* way. In contrast, in our setting the sequence of *contexts* are perturbed within some ball of radius $\delta$, in a *strategic* way (i.e. given in Definition 2.1). As a result, while both problem settings require the learner to deal with perturbations, the tools and techniques used to obtain no-regret in the two settings differ significantly. For example, their algorithm SuccessiveEliminationDelayed relies on learning confidence bounds for a set of arms using all of the delayed feedback, while our Algorithm 1 relies on greedy estimation of the relationship between contexts and rewards, using only data which has not been strategically modified. [HLNW22]: Nika Haghtalab, Thodoris Lykouris, Sloan Nietert, Alex Wei. Learning in Stackelberg Games with Non-myopic Agents, EC 2022.
Summary: They consider the problem of online learning with apple tasting feedback where the sequence of arriving agents may strategically modify their features (contexts). They show how to achieve sublinear strategic regret compared to the best policy in hindsight if agents were reporting truthfully. Their main result is $\tilde{O}(\sqrt{T})$ strategic regret when the sequence of agents arriving is stochastic. Their regret bound depends exponentially on d, the context dimension. They show how to mitigate this dependency and achieve a regret bound that depends polynomially on d but with a worse dependence on T. The main idea is to use a modified version of the explore-then-commit algorithm that was introduced in prior work. Finally, they extend their results to a sequence of agents chosen by an oblivious adversary, achieving sublinear regret; however, this requires an exponentially large amount of computation at each round. Strengths: I think the paper is written nicely and the results are nice. The model is interesting and the presentation is great. Weaknesses: The algorithms seem to be a natural extension of the non-strategic setting. Since they are using the linear thresholds model, they need to shift the decision boundary to account for strategic behavior, and a similar set of ideas has been studied in previous work. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It might be helpful to add a paragraph on how your bounds differ from the non-strategic case. Prop 2.4. last line, it might be helpful for the reader to argue why the first term is at most 0. Line 218, you can ensure the data is clean since it is strictly above the threshold? Thm 3.3. it might be helpful to also remind the reader what c_0 is.
Another open question is to get regret bounds when the agents are picked by an adaptive adversary. Are there any connections between the trembling hand model that you are describing and the \eps-best response model considered by [Haghtalab, Lykouris, Nietert, Wei' EC22]? Another piece of work related to your bandit feedback could be the work by [Ahmadi, Blum, Yang' EC23] where in one of their models they propose a modification of EXP3 algorithm in the strategic setting. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Please find our responses to your questions below. [*It might be helpful to add a paragraph on how your bounds differ from the non-strategic case.*] We would be happy to add such a paragraph in Section 1.1 where we discuss our contributions. At a high level, our main bound (i.e., the one for stochastic contexts) has a worse dependence on the context dimension, which arises as a direct result of the agents’ ability to strategize. [*Prop 2.4. last line, it might be helpful for the reader to argue why the first term is at most 0.*] The first term is at most zero since the principal’s reward from the optimal policy when the agent strategizes is at most their optimal reward when agents do not strategize. We will add a sentence saying this in our next revision. [*Line 218, you can ensure the data is clean since it is strictly above the threshold?*] Yes, we will clarify this in the revision. [*Thm 3.3. it might be helpful to also remind the reader what c_0 is.*] $c_0$ is a lower bound on the bounded density ratio and is defined in Assumption 3.1. We will clarify this in our theorem statement. [*Another open question is to get regret bounds when the agents are picked by an adaptive adversary.*] Modifying Algorithm 3 to be based on EXP3.P instead of EXP3 would allow us to handle this setting, and we will add a comment on that after the discussion on Algorithm 3. [*Are there any connections between the trembling hand model that you are describing and the \eps-best response model considered by [Haghtalab, Lykouris, Nietert, Wei' EC22]?*] Yes, the two models are similar at a high level. In particular, HLNW22 study a Stackelberg game setting in which the follower best-responds $\epsilon$-optimally. 
In our trembling hand setting, the strategic agent can also be thought of as $\epsilon$-best responding, although it is important to note that an $\epsilon$-best response for the agent in our setting will cause them to only strategize more than necessary (at least for sufficiently small epsilon). We will add a discussion about this connection (and a reference to HLNW22) in the revision. [*Another piece of work related to your bandit feedback could be the work by [Ahmadi, Blum, Yang' EC23] where in one of their models they propose a modification of EXP3 algorithm in the strategic setting.*] Thanks for sharing this reference. While related in the sense that they study an online strategic learning problem, ABY23 focus on the full feedback setting, whereas our primary focus is strategic learning under apple tasting and bandit feedback. Note that when ABY23 refer to ‘bandit feedback’ in Sec. 6.1, they mean that they only see the outcome under the deployed classifier (vs all possible classifiers), while we use `bandit feedback’ to refer to the fact that we only see feedback when a positive decision is made. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I went through them and do not have any other questions at this point.
NeurIPS 2023
Summary: This paper studies an online learning problem with one-sided (apple-tasting) feedback. At each round $t\in[T]$, an agent with a $d$-dimensional context vector $x_t$ arrives. The principal chooses a policy $\pi_t$ to map the context to binary decisions. Given $\pi_t$, the agent best responds strategically by modifying their context to $x'_t$ where $||x'_t-x_t||\leq \delta$. Then, the principal receives one-sided feedback that is linear in $x_t$. The goal is to design a policy with sub-linear strategic regret, which compares the principal's reward with that of the optimal policy had agents reported their contexts truthfully. The authors propose algorithms for stochastic and adversarial settings. Strengths: - Online learning with strategic agents has many real-world applications and the results of this work could be applied to automatic decision-making processes such as lending and hiring. Therefore, the problem is very well-motivated. - The paper is very well-written; in particular, the proof sketches provided in the paper give a very clear high-level picture of the techniques and ideas used to analyze the performance of the algorithms. - The authors have done an excellent job comparing and contrasting their paper with prior works; it is clear how this paper contributes to this literature. Weaknesses: - While the proposed algorithms are a great first step towards solving this problem, they are sub-optimal in different respects. First, Algorithm 1 does not make any use of unclean data and simply discards them. While the authors have shown that using unclean data for estimation is not useful, such data could be used with some confidence level to rule out some of the underperforming policies. Moreover, Algorithm 3 for the adversarial setting is not practically useful because of its exponential computational complexity.
- Based on the motivating examples that are provided in the introduction, I don't see why agents strategically modifying their context vector is necessarily a bad thing. For instance, paying bills on time and maintaining low credit utilization to increase credit scores (which is used for lending) should be incentivized (rather than discouraged). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the reasoning behind a bounded $\ell_2$ norm perturbation of size $\delta$ (instead of a different norm or a packing-type constraint) as the effort budget? Could you explain this in the context of a motivating application? - Why is $r_0$ missing in the linear thresholding policy of Algorithm 3? - Instead of considering each $d$-dimensional vector in $[0,1]^d$ (or a discretized grid of such vectors) as experts in Algorithm 3, is it possible to assume a structure on the policies and solve the resulting online structured learning problem (as it's done in the following papers)? Koolen, Wouter M., Manfred K. Warmuth, and Jyrki Kivinen. "Hedging Structured Concepts." COLT. 2010. Cohen, Alon, and Tamir Hazan. "Following the perturbed leader for online structured learning." International Conference on Machine Learning. PMLR, 2015. - Algorithm 4 is mentioned multiple times in the paper, but it's not provided in the main text. It'd be great to either have the algorithm or a short description of it in the paper. - Is it possible to relax the assumption that the variance $\sigma^2$ is known? How would such a relaxation affect the results? -------------------------------- I've read the authors' rebuttal, thanks for addressing my questions and concerns. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are clearly discussed in the conclusion. This is a theoretical work and a discussion of potential negative societal impacts is not necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. Please find our answers to your questions below. [*What is the reasoning behind a bounded $\ell_2$ norm perturbation of size $\delta$ (instead of a different norm or a packing-type constraint) as the effort budget? Could you explain this in the context of a motivating application?*] We chose to model the agent’s effort budget with respect to the $\ell_2$ norm in order to allow for an easier comparison to both (1) the existing literature on strategic learning (e.g., “Learning strategy-aware linear classifiers” by Chen et al., 2020) and (2) the adversarial robustness literature (see, e.g., “Certified Adversarial Robustness via Randomized Smoothing” by Cohen et al., 2019), in which an adversary is allowed to perturb the input to a machine learning algorithm such that the perturbed input is within an $\ell_2$ ball of the original input. We conjecture that our results are generalizable to the setting under which the agent’s modification is constrained by an arbitrary (known) p-norm. However, this is non-trivial because the naive reduction from the $\ell_2$ norm to an arbitrary p-norm is not tight (indeed, we only know of non-matching upper and lower bounds), and so just substituting the appropriate upper/lower bound may result in false positives/negatives. Finally, packing-type constraints have not been studied in the literature on strategic learning to the best of our knowledge, but we believe that this would be an interesting direction for future research. [*Why is $r_0$ missing in the linear thresholding policy of Algorithm 3?*] $r_t(a_{t,e_t}) = r_0$ whenever the principal takes action 0. Note that under our apple tasting feedback, whenever the principal takes action 0, they do not observe feedback. In Lines 339-340 we explain how the linear thresholding policy would change if the principal were to observe feedback whenever they chose action 0 too (i.e., bandit feedback).
[*Instead of considering each $d$-dimensional vector in $[0,1]^d$ (or a discretized grid of such vectors) as experts in Algorithm 3, is it possible to assume a structure on the policies and solve the resulting online structured learning problem (as it's done in the following papers)?*] We view structured learning in strategic settings as an important direction for future work as, to the best of our knowledge, it has not yet been considered even in the full feedback setting. It may be possible to adapt Algorithm 1 to a structured policy class, if we are given access to an optimization oracle for that class. This is because Algorithm 1 computes a greedy estimate of the optimal policy in the non-strategic setting, then shifts it to account for the strategic behavior of the agents. Using an optimization oracle, one could do something similar by first estimating the optimal policy using the optimization oracle, then modifying the estimated policy to account for strategic behavior. [*Algorithm 4 is mentioned multiple times in the paper, but it's not provided in the main text. It'd be great to either have the algorithm or a short description of it in the paper.*] Thank you for this suggestion, which was brought up by other reviewers as well. We will add Algorithm 4 to the main body of the paper. [*Is it possible to relax the assumption that the variance $\sigma^2$ is known? How would such a relaxation affect the results?*] Yes, thanks for pointing this out. An upper bound on the variance should suffice, as we only require knowledge of $\sigma^2$ when setting algorithm hyperparameters. In this case the same bounds will hold, but with the upper bound on $\sigma^2$ in place of $\sigma^2$. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response to my questions. 
I understand your motivation for choosing bounded $\ell_2$ norm perturbations; however, I believe an $\ell_2$ ball centered around the original context is not necessarily an accurate representation of the strategic behavior of the agents. For instance, the agents might be able to change some of their contexts more easily than others (an ellipsoid rather than a sphere) or there might be different costs per unit of changes to each of their contexts (motivating packing constraints). Also, it'd be great if you commented on the second point in the "Weaknesses" section of my review (regarding why agents strategically modifying their context vector is necessarily a bad thing). --- Reply to Comment 1.1.1: Comment: [*$\ell_2$ ball vs ellipsoid*] Thanks for pointing this out. Our results extend readily to the setting in which the agent's effort constraint takes the form of an ellipsoid rather than a sphere. Under this setting, the agent effort budget constraint in Definition 2.1 would be $||A^{1/2} (x' - x_t)||_2 \leq \delta$, where $A$ is some positive definite matrix. (We note that such an effort budget is also considered in [10], although under a different setting than ours.) If the matrix $A$ is known to the principal, this can be viewed as just a linear change in the feature representation, and therefore all of our results will carry over. We will make a note of this in the revision. [*why agents strategically modifying their context vector is necessarily a bad thing*] You are correct that there are some settings in which it makes sense to consider both undesirable strategizing (i.e., "gaming") and desirable strategizing (i.e., "improvement"). Indeed, there are several works in the literature on strategic learning which consider both gaming and improvement (e.g., [9, 32]). In this work, our main goal is to study the effects of apple tasting feedback when learning under agent incentives. 
As a result, we chose to focus on designing policies for the principal that are robust to manipulation/gaming (a common goal in the literature, see e.g. [15, 21]). However, we view such an extension to both gaming and improvement as an interesting direction for future research.
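The "linear change in the feature representation" argument from the exchange above can be checked numerically. The sketch below is our own illustration (not the authors' code), with an arbitrary positive definite matrix `A` standing in for the effort matrix: under the reparameterization $z = A^{1/2} x$, the ellipsoidal budget on $x$ becomes an ordinary $\ell_2$ ball on $z$.

```python
import numpy as np

# Sketch of the ellipsoid-to-sphere reduction (illustrative, not from the paper):
# with z = A^{1/2} x, the ellipsoidal constraint ||A^{1/2}(x' - x)||_2 <= delta
# is exactly the spherical constraint ||z' - z||_2 <= delta.
rng = np.random.default_rng(0)
d = 4
B = rng.standard_normal((d, d))
A = B @ B.T + d * np.eye(d)              # an arbitrary positive definite effort matrix
w, V = np.linalg.eigh(A)
A_half = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric square root: A_half @ A_half == A

x, x_mod = rng.standard_normal(d), rng.standard_normal(d)   # original / modified context
z, z_mod = A_half @ x, A_half @ x_mod                       # reparameterized features
ellipsoid_cost = np.linalg.norm(A_half @ (x_mod - x))       # effort under the ellipsoid budget
sphere_cost = np.linalg.norm(z_mod - z)                     # plain l2 effort in z-space
assert np.isclose(ellipsoid_cost, sphere_cost)
```

Since the map is linear and invertible, any policy and regret bound stated for the spherical budget applies verbatim in the transformed feature space.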
null
null
null
null
null
null
Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture
Accept (oral)
Summary: The paper introduces a new neural network layer which runs efficiently on modern GPUs, and exhibits strong performance against state-of-the-art on several benchmarks. The layer is based on Monarch matrices, introduced in [7]. Monarch matrices use permutation matrices and block-diagonal matrices to represent dependencies across feature and temporal dimensions. Inspired by butterfly matrices in the FFT, this matrix parameterization can represent anything from convolutions to fully connected matrices, depending on their order. The Monarch Mixer layer introduced in this paper uses two Monarch matrices as well as a matrix (K) that is used in point-wise multiplication. Performance is compared on language and image classification, replacing transformer and fully connected layers with the M2 layer. Finally, the paper presents a theoretical derivation of how this layer can be modified for use in causal language tasks, while maintaining its computational efficiency. Strengths: Technically, this paper makes a strong contribution by proposing a computationally efficient layer. The computational performance is benchmarked on modern GPUs. I really liked the discussion on factors affecting runtime performance on modern GPUs -- this is a really valuable introduction to this topic. The proposed layer achieves superior performance, with fewer trainable parameters, in less time. Like MLPMixer, it also does away with the attention mechanism of transformers -- which suggests M2 is also a good architectural inductive bias -- perhaps competitive with attention? Comparing against [7], I would say the largest technical contribution of this paper is on how to modify the M2 layer to perform in the causal setting. Weaknesses: The main weakness of the paper is that it is somewhat hard to read/understand without first reading [7]. Section 4 in particular is quite dense. It seems that the authors struggled to fit the paper within the NeurIPS page limit. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: -Does the expressivity of M increase with the order p? Perhaps this is obvious, but it should be stated explicitly in the paper. Can M be used to express any dense matrix? -Why do you think your model exceeds the performance of other state-of-the art models without using attention? I always thought one of the most important aspects of attention was that it afforded permutation invariance, does M2 serve a similar purpose? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations were addressed in second paragraph of Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback on the technical contributions of our paper. We are glad that you found the GPU performance discussion insightful, and we appreciate your constructive comments on how to improve the paper. **W1. More intuitive explanation of Monarch matrices.** Thank you for your suggestions on how to improve the clarity of the paper. We plan to use the extra space in the camera ready to include both a more complete description of Monarch matrices and the motivation behind their definition. We plan to motivate the Monarch matrices via the FFT algorithm, as follows: The motivation behind the Monarch matrix is to adapt and generalize the FFT algorithm. The FFT algorithm splits an FFT of size N into smaller FFT’s over portions of the input, interleaved with permutations. More precisely, let $F_N$ denote the Fourier transform of size $N$, and assume N is a perfect square for simplicity. The FFT breaks down $F_N$ as follows: $F_N = P F_L P D P F_R P,$ where $F_L$ and $F_R$ are block-diagonal matrices whose blocks are made of $F_\sqrt{N}$, $D$ is a diagonal matrix, and $P$ is a permutation that reshapes a 1D input into $\sqrt{N} \times \sqrt{N}$, and takes the transpose. A Monarch matrix generalizes this computation pattern by “rolling in” the diagonal matrix, and letting the blocks in the block-diagonal matrices be arbitrary instead of fixed to an FFT: $M = PLPRP$ This additional flexibility allows Monarch matrices to express a wider class of structured matrices than the FFT (but they are not as completely expressive as a dense matrix). In our paper, we also generalize Monarch matrices past the order 2 phase, so there can be more than two block-diagonal matrices interleaved with permutations (Figure 1, left in our original submission). **Q1. 
Expressivity of $M$ with the order $p$.** This is a little subtle – the expressivity actually goes _down_ with increasing $p$, since we decrease the sizes of the blocks (block size $b$ is $\sqrt{N}$ for order 2 and $\sqrt[3]{N}$ for order 3; for general $p$ the block size is $b=\sqrt[p]{N}$). This in turn implies that a $p$-variate Monarch has $O\left(pN^{1+1/p}\right)$ many parameters, i.e., the number of parameters decreases as $p$ increases and hence its expressivity goes down (this follows e.g. from a counting argument). $p=1$ gives an arbitrary dense matrix, but that is because in that case the block size is $b=N$ and hence is a trivial case. For $p>1$, a single $p$-variate Monarch matrix cannot express an arbitrary matrix. However, one can show that one can express an arbitrary $N\times N$ matrix as a product of $m=O(N)$ matrices $M_1,\dots,M_m$ where each $M_j$ is a $p$-variate Monarch matrix (this follows from a known result that an arbitrary matrix can be represented as a product of $O(N)$ Toeplitz matrices). We will add a discussion of these properties to the main body when we introduce $p$-order Monarch matrices. **Q2. Why does M2 outperform attention-based models?** There are two pieces – first, we build on prior work to build sub-quadratic replacements for attention. Second, we replace the MLPs with sub-quadratic alternatives, which achieves the same performance with the same height/width but fewer overall parameters. M2 builds on prior work studying how to replace attention with a sub-quadratic alternative while maintaining high quality. Many of these models use a combination of long and short convolutions with elementwise multiplication, e.g. [1, 2, 3, 4]. These convolutions are often computed with an FFT, which means that they can be expressed using Monarch matrices and elementwise multiplication. 
We build on the insights from these architectures when using Monarch for sequence mixing in our M2 models with an alternative that scales sub-quadratically in sequence length while maintaining high quality. In addition, M2 uses Monarch matrices to scale sub-quadratically in the model dimension by replacing MLPs. Our results – that the dense layers in MLPs can be replaced by sparse(r) matrices without losing quality – may suggest that the current generation of models is overparameterized, and that there exist much more efficient architectures to develop. We are excited by these possibilities, and we look forward to building on these ideas in the future. [1] Long Range Language Modeling via Gated State Spaces. Harsh Mehta, Ankit Gupta, Ashok Cutkosky, Behnam Neyshabur. ICLR 2022. [2] Pretraining Without Attention. Junxiong Wang, Jing Nathan Yan, Albert Gu, Alexander M. Rush. ACL 2023. [3] Hungry Hungry Hippos: Towards Language Modeling with State Space Models. Daniel Y. Fu, Tri Dao, Khaled K. Saab, Armin W. Thomas, Atri Rudra, Christopher Ré. ICLR 2023. [4] Hyena Hierarchy: Towards Larger Convolutional Language Models. Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré. ICML 2023. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: Thank you for your clarifying comments. I suggest you incorporate them in your paper. --- Reply to Comment 1.1.1: Comment: Thank you, we have added it to our updated manuscript!
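The FFT factorization used above to motivate Monarch matrices ($F_N = P F_L P D P F_R P$: permutations interleaved with block-diagonal small DFTs and a diagonal twiddle matrix) can be checked numerically. The following is our own minimal NumPy sketch of the standard mixed-radix decomposition, not the authors' implementation; it uses general factors $p \cdot q = N$ rather than the $\sqrt{N} \times \sqrt{N}$ case described in the rebuttal.

```python
import numpy as np

def dft_four_step(x, p, q):
    """N-point DFT via the mixed-radix Cooley-Tukey factorization, N = p*q.

    Mirrors the pattern permute -> block-diagonal small DFTs -> diagonal
    twiddles -> block-diagonal small DFTs -> permute (illustrative sketch).
    """
    N = p * q
    A = np.asarray(x, dtype=complex).reshape(p, q)  # permutation: A[k1, k2] = x[q*k1 + k2]
    Y = np.fft.fft(A, axis=0)                       # p-point DFT per column (block-diagonal F_L)
    j1 = np.arange(p)[:, None]
    k2 = np.arange(q)[None, :]
    Y = Y * np.exp(-2j * np.pi * j1 * k2 / N)       # diagonal twiddle matrix D
    Z = np.fft.fft(Y, axis=1)                       # q-point DFT per row (block-diagonal F_R)
    return Z.T.reshape(N)                           # output permutation: Z[j1, j2] = X[j1 + p*j2]

x = np.arange(12, dtype=complex)
assert np.allclose(dft_four_step(x, 3, 4), np.fft.fft(x))
```

A Monarch matrix generalizes exactly this computation pattern: the diagonal is absorbed into the factors, and the small DFT blocks are replaced by arbitrary (learnable) blocks.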
Summary: This paper takes a fresh approach by addressing the issue of high complexity in current neural networks. It points out that the computational complexity of Transformers is quadratic with respect to both the sequence length and the feature dimension. Previous papers primarily focused on reducing the complexity related to sequence length, but this paper is the first to propose a method that reduces complexity for both sequence length and feature dimension. The specific approach used is the utilization of second-order Monarch matrices, with the model structure referred to as the Monarch Mixer. Additionally, the authors introduce a novel initialization method for the Monarch matrices, enabling them to handle causal language modeling. The effectiveness of this approach is validated in the areas of non-causal language modeling, causal language modeling, and image classification. Strengths: Indeed, it is novel to consider the optimization of complexity from both the sequence dimension and the feature dimension. Furthermore, initializing the model for the causal scenario poses a definite challenge, and the authors have successfully accomplished this task. Weaknesses: There aren't many weaknesses; for specific questions, please refer to the **Questions** section. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. I'm not very familiar with the Monarch matrix, but is its core idea to use the product of block-diagonal matrices and permutation matrices as a replacement for a dense matrix? 2. What is the motivation behind the Monarch matrix? Despite having significantly fewer parameters, it seems to perform comparably to dense matrices in small-scale models. Can this conclusion be extended to models larger than 10 billion parameters? 3. In Line 257, "We set the Monarch matrices to DFT and inverse DFT matrices, to simulate long convolutions [15, 37], and do not learn them." 
Does this refer to setting the block-diagonal matrices of the Monarch Matrix as DFT matrices? 4. The experiments for Non-Causal Language Modeling and Image Classification should be more comprehensive, such as considering the Monarch matrices as learnable. 5. Could you provide a more intuitive explanation of the initialization for the causal scenario? From examining the code, it seems like the Monarch Matrix is initialized as a lower triangular matrix? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and questions. These questions have helped us improve the presentation of our paper. We provide a more detailed explanation of Monarch matrices below, which we plan to add to the paper. **Q1. Monarch Motivation.** The motivation behind the Monarch matrix is to adapt and generalize the FFT algorithm. The FFT algorithm splits an FFT of size N into smaller FFT’s over portions of the input, interleaved with permutations. More precisely, let $F_N$ denote the Fourier transform of size $N$, and assume N is a perfect square for simplicity. The FFT breaks down $F_N$ as follows: $F_N = P F_L P D P F_R P,$ where $F_L$ and $F_R$ are block-diagonal matrices whose blocks are made of $F_\sqrt{N}$, $D$ is a diagonal matrix, and $P$ is a permutation that reshapes a 1D input into $\sqrt{N} \times \sqrt{N}$, and takes the transpose. A Monarch matrix generalizes this computation pattern by “rolling in” the diagonal matrix, and letting the blocks in the block-diagonal matrices be arbitrary instead of fixed to an FFT: $M = PLPRP$ This additional flexibility allows Monarch matrices to express a wider class of structured matrices than the FFT (but they are not as completely expressive as a dense matrix). In our paper, we also generalize Monarch matrices past the order 2 phase, so there can be more than two block-diagonal matrices interleaved with permutations (Figure 1, left in our original submission). **Q2. Scaling Results.** We have seen promising initial scaling results – in the common response, we have reported results for both M2-BERT-Base and M2-BERT-Large. In our original submission, we also saw similar scaling performance on causal language modeling with GPT2-s and GPT2-m equivalent models (Table 9 in the main paper). We look forward to continuing to scale these models and seeing how well the trends hold. **Q3. 
Monarch DFT.** A Monarch matrix can exactly express both a DFT and an inverse DFT (see Appendix F, corollary 6 in the original submission for the exact parameterization – they are similar to DFT matrices, but with slight modifications). For these experiments, we set the Monarch matrices to express the DFT and inverse DFT, respectively. **Q4. Learnable Monarch.** Thank you for the suggestion to extend the experiments for learnable Monarch matrices. Please see the common response for additional experiments along these lines. Making the Monarch matrices learnable yields small benefits in quality. **Q5. Causal Monarch Interpretation.** One way to interpret a Monarch matrix is to view it as evaluating a polynomial at a set of evaluation points $(a_i)$. When the Monarch matrix is used in a convolution, it is equivalent to multiplying two polynomials, by multiplying their evaluations $h(a_i) = f(a_i)g(a_i)$. The causal parameterization ensures that the resulting polynomial $h(a_i)$ is causal in $f$ – i.e., its coefficients do not depend on coefficients of $f$ that come later in the sequence.
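The polynomial-evaluation view in Q5 can be illustrated with generic FFT-based polynomial multiplication (our own sketch, not the paper's causal Monarch parameterization): multiplying pointwise evaluations $h(a_i) = f(a_i)g(a_i)$ and interpolating back recovers the coefficients of $h = f \cdot g$, and the degree-$t$ coefficient of $h$ depends only on coefficients of $f$ and $g$ up to degree $t$ — the causality property described above — provided the transforms are long enough to avoid circular wrap-around.

```python
import numpy as np

# Convolution as polynomial multiplication: evaluate f and g at the FFT's
# roots of unity, multiply pointwise, and interpolate back. Zero-padding to
# >= len(f) + len(g) - 1 points avoids circular wrap-around, so h[t] depends
# only on f[0..t] and g[0..t] (causal in f).
def poly_mul_fft(f, g):
    n = len(f) + len(g) - 1
    H = np.fft.fft(f, n) * np.fft.fft(g, n)   # pointwise h(a_i) = f(a_i) g(a_i)
    return np.fft.ifft(H).real                # coefficients of h = f * g

f = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0])
assert np.allclose(poly_mul_fft(f, g), np.convolve(f, g))
```

Changing a late coefficient of `f` (say `f[2]`) leaves the early outputs `h[0]`, `h[1]` untouched, which is the sub-frame causality the parameterization needs to preserve.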
Summary: The Monarch Mixer (M2) combines MLP mixer and Conv mixer and yields a new family of mixers that is formalized in terms of Monarch matrices. the approach is novel and reminds me of an extension of [15] in their reference. The main advantage of M2 is in its sub-quadratic computation capability. The authors evaluate their method on a set of large language models and prove their framework functions properly on meaningful and challenging tasks. The paper is well-written and easy to follow. Strengths: The paper stands out in its clear and proper presentation. The idea is well motivated from a hardware perspective, explained intuitively, and studied theoretically. Evaluation of the method on causal and non-causal large language models is an asset and demonstrates a clear potential of the method. Weaknesses: - The new approach is only benchmarked on Transformer tasks, while speech applications look relevant but are ignored. - It would be fair to see a comparison of inference latency and metric performance of the M2-based BERT with a configuration of BERT that has a smaller number of parameters, but the same number of FLOPs. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Table 3 mentions a "Transformer" model. What is the configuration of the model? - Tables 4 to 7 mention BERT-s. What is BERT-s? What are the details of its architecture? - Table 2 seems to be incomplete. The FLOP utilization for dense matmul is missing from Table 2. - Caption of Table 4 mentions GPU, however the body of the Table contains no inference results on GPU. Did the authors perform inference on CPU only? Why GPU inference is not reported in Table 4? - A comparison with SwinMLP-B [1] and other SWIN-v2 models is more informative than the relatively older ViT model. - What is BERT-s? What is ViT-s? please provide the model details. [1]: Zheng, Hao, Guohui Wang, and Xuchen Li. 
"Swin-MLP: a strawberry appearance quality identification method by Swin Transformer and multi-layer perceptron." Journal of Food Measurement and Characterization 16.4 (2022): 2789-2800. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: While the method clearly shows that the method works well on Transformer models, convolutional models, and speech applications are ignored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful suggestions. We hope that the additional experiments reported in the common response have improved the paper, and we look forward to further discussion. Here, we answer the specific weaknesses and questions raised in your review. **W1. Evaluation on speech applications.** Thank you for the suggestion to evaluate on speech applications. Please see the experiment on speech commands in the common response. We hope this result helps provide further evidence for the generality of our method. **W2. Comparison against parameter-matched BERT models**. Thank you for your suggestion to evaluate M2-BERT against parameter-matched BERT models. We have reported the results of these experiments in our common response. The 80M BERT model underperforms M2-BERT in quality on GLUE. A highly-optimized 80M BERT achieves higher throughput than M2-BERT on short sequences, but underperforms on inputs longer than 1K. **Q1. Architecture in synthetics.** For these synthetics, all models, including the Transformer, are small two-layer models with hidden dimension 64. **Q2 & Q6. BERT-s, ViT-s.** “BERT-s” and “ViT-s” in the submission are typos, we mean BERT-base and ViT-B (the -s terminology is the equivalent model for GPT2-s). These are now fixed in the draft. **Q3. Dense Matmul FLOP Util.** We report FLOP utilization of dense matmul here, and we have added it to our updated manuscript: | | **4K** | **16K** | **65K** | |-----:|:-----:|:-----:|:-----:| | **A100** | 63.0% | 78.0% | 80.0% | | **4090** | 74.6% | 96.7% | 97.9% | **Q4. GPU Inference.** We have reported the GPU inference results in the common response, and we will add them to Table 4 in the updated manuscript. **Q5. Swin Comparisons.** Thank you for the suggestion to compare to Swin-v2 and Swin-MLP. We have run these experiments and reported them in the common response (Table **4** in the accompanying PDF). 
--- Rebuttal Comment 1.1: Title: Thanks Comment: I would like to thank the authors for their rebuttal. I keep my rating.
null
null
Rebuttal 1: Rebuttal: # Common Response We thank all reviewers for their time and valuable comments, which have helped us improve our paper. In this paper, we introduce Monarch Mixer, a new architecture that is hardware-efficient and sub-quadratic in both sequence length and model dimension. We demonstrate that Monarch Mixer can be used as a drop-in replacement for attention and MLP in Transformers in BERT-style, GPT-style, and ViT-style modeling, matching quality with up to 27% parameter reduction and up to 9x faster inference for long sequences. We are excited to take a first step with this work in developing new architectures that are fundamentally more efficient than Transformers while maintaining quality. We are glad to have received positive feedback on the motivation (reviewers **xUjR**, **zN2M**), clear technical contribution (reviewers **qwKK**, **zN2M**), execution and experiments (reviewers **xUjR**, **zN2M**), and overall clarity of presentation (reviewers **xUjR**, **qwKK**). 
In our general response, we are excited to report the results of some additional experiments in pretraining quality, as well as experiments requested by the reviewers: * Stronger BERT results: * M2 achieves GLUE performance that matches BERT-Base from (Devlin et al 2018), with 27% fewer parameters and up to 9X faster GPU inference time for long sequences (requested by reviewer **xUjR**) * Scaling up to BERT-Large equivalent – M2-BERT-Large matches BERT-Large in quality with 24% fewer parameters and achieves up to 4X faster GPU inference time (demonstrating scaling performance, as requested by reviewer **qwKK**) * Benchmarks against smaller BERT models (requested by reviewer **xUjR**) * Swin-M2: matching ImageNet accuracy with 32% fewer parameters when replacing attention and MLPs in Swin-V2 with Monarch Mixer, as a drop-in replacement (requested by reviewer **xUjR**) * Speech: M2 matches state-of-the-art in classification accuracy when classifying raw 16 kHz speech signals on the SpeechCommands task (requested by reviewer **xUjR**) * Experiments with learnable Monarch matrices in the sequence mixer for CIFAR, achieving 1.5 points in lift from learning the matrices (requested by reviewer **qwKK**) The results tables for these experiments are in the accompanying PDF for the rebuttal. We plan to include these experiments in our updated manuscript and welcome further discussion on how to improve the paper during the discussion period. ## BERT Since our initial submission, we have improved our BERT pretraining formula via improvements to pretraining hyperparameters and data. We are happy to report stronger downstream GLUE results, competitive with those from the official BERT-Base model trained by (Devlin et al 2018). We have also included an 80M BERT trained using our formula as a baseline, following the suggestion from reviewer **xUjR**. Table **1** in the accompanying PDF shows the results. 
M2-BERT-base is competitive with the Devlin et al BERT-base, with 27% fewer parameters. When scaled up to parameter-match BERT-base, M2-BERT outperforms BERT-base by 1.3 GLUE points on average. We have also scaled up to a BERT-Large equivalent, which is competitive with the BERT-Large trained by (Devlin et al 2018). Table **2** in the accompanying PDF shows that M2-BERT-large is competitive with BERT-large with 24% fewer parameters. These results suggest that M2 can scale well. We are excited to scale up to even larger models in future work (as suggested by reviewer **qwKK**). **BERT GPU Inference** Next, we report additional results on GPU inference times for different sequence lengths (addressing a question by reviewer **xUjR**). Inference times are reported as throughput, in tokens/ms on a single A100-40GB. Table **3** (top) in the accompanying PDF shows that M2-BERT-base achieves higher throughput than even highly-optimized BERT models, and achieves 9.0X higher throughput than a standard BERT-base model for long (4K) input sequences. Table **3** (bottom) in the accompanying PDF reports throughput against the 80M BERT as well. When parameter-matched, M2-BERT is slower than the most highly-optimized attention kernels for short sequences, but still faster for long sequences. Benchmarks for BERT-large and M2-BERT-large are omitted for space in the rebuttal, but the trends are similar (but the HF model OOM’s earlier, leading to a maximum speedup over HF of 4.34X at sequence length 2K). ## Swin-M2 Experiments Reviewer **xUjR** suggests comparing ImageNet performance against Swin-V2 and Swin-MLP. Table **4** in the accompanying PDF reports the results of replacing attention and the MLP in Swin-V2, using M2 as a drop-in replacement. Surprisingly, Swin-M2 outperforms Swin-MLP-B, is competitive with Swin-V1-B, and comes within 1 point of Swin-V2-B – even without any hyperparameter tuning or architecture adjustment from our ViT formula. 
We expect that performance may improve further with hyperparameter tuning specific to M2. These results provide additional evidence that M2 is a strong drop-in replacement for attention and MLPs in various architectures. ## Speech Applications Reviewer **xUjR** suggests evaluating M2 on speech applications. Table **5** in the accompanying PDF presents the performance of M2 on Speech Commands-10, a speech classification task over raw 1-second clips sampled at 16 kHz. M2 is competitive with state-of-the-art architectures on this task. ## Learnable Monarch Matrices in Sequence Mixer Reviewer **qwKK** suggests extending the experiments to include the case where the Monarch matrices are learnable. In our submission, the monarch matrices in the state mixer (i.e. MLP) are already learnable. Table **6** in the accompanying PDF presents an experiment evaluating sequence mixers (i.e. attention) with learnable monarch matrices on sequential CIFAR. Learning the Monarch matrices in the sequence mixer yields 1.5 points of lift. We look forward to further exploring this regime in future work. Pdf: /pdf/63de5342b169e4531d12f157393fc540e8873063.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Enhancing Motion Deblurring in High-Speed Scenes with Spike Streams
Accept (poster)
Summary: This paper proposed the first deblurring method that uses two types of inputs: RGB frames and spike streams. Two synthetic datasets are introduced. They are also used for training and evaluating the proposed method. This method outperforms other state-of-the-art methods on these synthetic datasets. Strengths: * This paper proposes the first spike-based motion deblurring model, which is based on transformers. * The authors claim that they can reconstruct sub-frame sharp images at any timestamp, which is a great advantage. * The datasets can be of research value to the community. * The proposed method outperforms SOTA on those synthetic datasets. Weaknesses: * The datasets use synthetic blurry images and a synthetic spike camera simulator. The method is evaluated on a real spike stream only on a few images in the supplementary materials (which is not even mentioned in the main paper). Moreover, this brings up another issue: synchronizing a real spike camera with an RGB camera is not trivial. Some hybrid camera setup is shown in the supplementary materials, but it requires manually aligned spatial and temporal outputs. More thorough evaluation is needed. * The main loss (13) is not clear. The first and the last terms seem the same. Why are they separated? * The paper contains some typos: L205 (noisy), L76/L98 (Researches), L15 (vFurthermore) * Intro: not true that deep learning models always find a clear image (L32) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Addressing any of the above weaknesses would be great, especially about the real data. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some limitations are addressed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful comments, summary of our paper, and affirmation of the performance. We would like to address your concerns and answer your questions here. ***1. More thorough evaluation in real-world scenarios is needed.*** Thanks for your suggestion! Please refer to Response 1 in **To all reviewers**. ***2. The main loss (13) is not clear. The first and the last terms seem the same. Why are they separated?*** Thanks for pointing it out. As presented in Sections 3.3 and 3.4.2, in the first term of the loss (Equation 13), $\hat{I}_t$ represents the final deblurred output from the deblurring branch, while in the third term, $\overline{I}_t$ represents the initial deblurred result generated by the deblurring branch. The separation of these terms serves a distinct purpose. We introduce $\overline{I}_t$ through the CAMMA module as a high-resolution image-domain prior to guide the spike branch and aid spike reconstruction. By applying the loss function to $\overline{I}_t$ and ${I}_t$, we aim to minimize the blurriness present in $\overline{I}_t$, thereby providing higher-quality image-domain priors to guide the spike branch's reconstruction. The loss function between $\hat{I}_t$ and ${I}_t$ constrains the final deblurring result. We will clarify it in the revised version. ***3. The paper contains some typos: L205 (noisy), L76/L98 (Researches), L15 (vFurthermore).*** We greatly appreciate your meticulous review. We have revised these typos in the paper. ***4. Intro: not true that deep learning models always find a clear image (L32).*** We acknowledge that our statement might not be precise. We have revised the statement to emphasize the potential of deep learning models when trained on large and diverse datasets, while also considering their limitations when trained with only a single modality as input. --- Rebuttal Comment 1.1: Title: After reading the rebuttal Comment: Thank you for the rebuttal. 
My main concerns are mostly well addressed. I also read other reviews and responses to them. I believe they are also addressed well. Overall, this paper has no significant flaws, so I'm in favor of acceptance. --- Reply to Comment 1.1.1: Title: Thanks for your valuable time Comment: Thank you sincerely for your insightful comments, valuable suggestions, and kind appreciation of our work. Thanks a lot for your valuable time!
Summary: The paper attempts to remove motion blur from high-speed scenes making use of spike streams. Most deep learning-based deblurring algorithms predict sharp frames relying only on the input blurry frames and are not robust when the blurry artifact is severe. This work proposes to use spike streams that could be obtained from spike cameras along with the blurry input for motion deblurring. The authors propose a framework that integrates both modalities in a manner where information is shared bi-directionally. In addition, the work builds two synthetic datasets on top of the GoPro and X4K1000FPS datasets for training and evaluation. Experimental comparisons are presented with several baselines. Strengths: * The proposed problem formulation of motion deblurring using spike streams is quite new and a promising direction for further research * The joint training of RGB and gray image reconstruction is an intuitive approach to enforce the proposed network to use both modalities * The Spk-X4K1000FPS and Spk-GoPro datasets would be a useful contribution to the community if they become publicly available * The qualitative results look good Weaknesses: * The motivation of the work is not convincingly justified. There are several previous works [1,2] that use event information for motion deblurring. Compared to these works, it is not clear why the paper chooses spike information over events. The explanation in L52-53 that "...spike cameras record low-resolution texture information and ... this serves as a stronger guidance for deblurring task" is very vague and not convincing. What does low-resolution texture information have to do with motion deblurring? And why should it be stronger guidance compared to high temporal resolution? The claim in L49-51 that, "Most event-based methods unidirectionally utilize information from the event domain to assist the image domain, without achieving the complementarity of information from both domains", is not true.
There are several event-based works [1] that jointly use both the RGB and event information for motion deblurring. Hence, this claim should be toned down. * The experimental results in the paper are weak and unfairly done. The experimental comparison on X4K1000FPS in Table 1 is not very meaningful as it is clearly biased toward the proposed method. The other baselines HiNet and NAFNet do not use extra data and hence, it is not surprising that they underperform. A more thorough and fair comparison should be done with previous event-based methods following their experiment protocol. The results for the GoPro dataset in Table 2 are simply copied (quoted) from previous papers. However, the authors follow a different experiment setting in their paper. Hence, how can the authors conclude that the performance gain is coming from the proposed approach and not necessarily from the different experimental protocols? The qualitative analysis both in the main paper and supplementary fails to compare with the state-of-the-art approach, REFID [2]. Why is that? * More ablations are needed to justify the different design choices in the proposed approach. There has to be an ablation experiment that only uses the image stream to prove the benefit of incorporating the spike stream. I want to see an ablation for the claims in L188-193 (directly fusing the input blurred image with the spike branch). Removing the whole CSFI, CAMMA, and initial deblurring branch only results in a PSNR decrease of 0.2 dB. Hence, how much worse would it be if we naively fuse the blurred input with the spike branch? What is the point of doing several experiments with different input representations? How does that lead to a better understanding of the proposed approach? I get that the spike branch leads to a good performance. However, all other modules seem to be very redundant with very minor contributions to network performance.
It would be good to provide the visualization of the motion magnitude mask in L208, to verify if the CAMMA module is indeed doing what it is claimed to be doing. * The writing of the paper could be improved. The submission will greatly benefit from re-writing the methodology part by removing too many unnecessarily coined acronyms and by restructuring the explanation of the different modules in the proposed approach. Moreover, important details such as train-test set splits and results on real-world blurred images should be in the main paper instead of the supplementary. References: [1] Lei Sun, Christos Sakaridis, Jingyun Liang, Qi Jiang, Kailun Yang, Peng Sun, Yaozu Ye, Kaiwei Wang, and Luc Van Gool. Event-based fusion for motion deblurring with cross-modal attention, ECCV 2022. [2] Lei Sun, Christos Sakaridis, Jingyun Liang, Peng Sun, Jiezhang Cao, Kai Zhang, Qi Jiang, Kaiwei Wang, and Luc Van Gool. Event-based frame interpolation with ad-hoc deblurring, CVPR 2023. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the "Weaknesses" section and address the raised concerns carefully. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***1. What does low-resolution texture information have to do with motion deblurring? Why should it be a stronger guidance compared to high temporal resolution?*** We would like to clarify that the low resolution of the spike camera is due to current hardware limitations. Our emphasis is on the texture information within spike data (rather than resolution), which contributes positively to motion deblurring. To validate this assertion, we conducted a simple experiment. We use both the blurry RGB images in Spk-X4K1000FPS ($e=65$) and the grayscale images of the corresponding sharp RGB images as input to NAFNet for training. As shown in Tab.R4, we find that a grayscale image with rich texture can effectively guide the restoration of blurred RGB images. **Table.R4: Toy experiment using NAFNet.** | Input | PSNR ($e=65$) | SSIM ($e=65$) | | ---- | ---- | ---- | | Blurry RGB |29.06 |0.878 | | Blurry RGB + Gray Ground Truth |**39.84** |**0.999** | Thus, we assert that sharp texture information can effectively guide deblurring. In other words, the texture information within a grayscale image can remap color information in the blurred RGB image, even in cases of extreme blur ($e=65$). Our method explicitly reconstructs in the spike branch and strives to introduce sharper texture information into the deblurring branch to guide deblurring. Our CAMMA module aims at integrating image priors from the deblurring branch into the spike branch to enhance the spike reconstruction and subsequently provide improved guidance for the deblurring branch. Supplementary Tab.S1 demonstrates that the CAMMA enhances the quality of spike branch reconstruction. Tab.S2 illustrates that the CAMMA also improves the deblurring performance. Moreover, both temporal and texture information provide clues for reconstruction and deblurring. We apologize for the overly absolute description used in the paper. We will rectify this in the revised version. ***2.
The claim in L49-51 that, "Most event-based methods unidirectionally utilize information from the event domain to assist the image domain" is not accurate.*** Please refer to Response 2 in **To all reviewers**. ***3. A more thorough and fair comparison on X4K1000FPS should be done with previous event-based methods.*** Please refer to Response 2 in **To all reviewers**. ***4. The authors follow a different experiment setting on the GoPro dataset in their paper.*** The data presented in Tab.2 is primarily cited from REFID[2]. We observed that REFID used different experiment settings in their experiments compared to EFNet[1]. This discrepancy could be attributed to REFID's utilization of recurrent units and multi-frame outputs, which may lead to higher GPU memory consumption and make it hard to use a larger batch size. In our case, we have conducted additional experiments on the GoPro dataset using the same training settings as EFNet, as shown in Tab.R5. **Table.R5: Additional experiments of our method on the GoPro dataset using the same training settings as EFNet.** | Method |Extra Data| PSNR | SSIM | | ---- | ---- | ---- | ---- | | EFNet |Event| 35.46 |0.972 | | REFID |Event|35.91 |0.973 | | Ours (old training settings) |Spike|36.12 |0.971 | | Ours (the same training settings as EFNet) |Spike|**36.83** |**0.976** | These results confirm the performance improvements achieved by our method even under EFNet's training parameter settings. We believe that this performance gain is attributed to higher learning rates and an increased number of total iterations (in our previous experiments, our network can converge in around 100k iterations, whereas EFNet requires approximately 300k iterations). ***5. The qualitative analysis does not compare with the state-of-the-art approach, REFID.*** We apologize for not including a comparison with REFID.
Due to the lack of code before the NeurIPS 2023 deadline and of suitable comparative images in the original REFID paper, we couldn't compare in the initial submission. But now, with the open-source code available, we're addressing this gap by including a qualitative REFID comparison in the revised paper. ***6. The ablation experiment that only uses the image stream to prove the benefit of incorporating the spike stream.*** We have conducted the requested ablation experiment. The results are provided in Tab.R6. This experiment validates that the introduction of spike data effectively improves performance. **Table.R6: Additional ablation experiments.** | Method | PSNR ($e=33/e=65$) | SSIM ($e=33/e=65$) | | ---- | ---- | ---- | | Only uses the image stream |32.45/28.25 |0.895/0.841 | | Our final SpkDeblurNet |**37.42/35.94** |**0.968/0.966** | ***7. The ablation for the claims in L188-193 (directly fusing the input blurred image with the spike branch).*** We have carried out the suggested experiments, involving the direct fusion of the input blurred image with the spike branch. The results are shown in Tab.R7. We observed that, compared to removing the CAMMA branch (eliminating information flow from image to spike branch), fusing the blurred image directly provides a performance gain, highlighting the importance of bidirectional information complementarity. With CAMMA, performance further improves, demonstrating its effectiveness in transferring image information to the spike branch efficiently. **Table.R7: Additional ablation experiments.** | Method | PSNR ($e=33/e=65$) | SSIM ($e=33/e=65$) | | ---- | ---- | ---- | | Remove CAMMA branch |37.20/35.64 |0.967/0.965 | | Directly fuse the input blurred image |37.31/35.80 |0.967/0.965 | | Our final SpkDeblurNet |**37.42/35.94** |**0.968/0.966** | ***8. What is the point of doing several experiments with different input representations?*** Please refer to Response 3 in To all reviewers. ***9.
The visualization of the motion magnitude mask in L208.*** Please refer to Response 4 in To all reviewers. ***10. About the writing.*** We appreciate it and will carefully revise the writing. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. I have read the rebuttal and other reviews. The following concerns remain unaddressed: 1. The authors do not directly answer the question of why low-resolution texture information (spike) should be stronger guidance compared to high temporal resolution (event) information as stated in Lines 52-53. The example presented in the rebuttal does not address this question since the problem setting is closer to a colorization task rather than a deblurring task. Given a motion blur is caused by a sudden camera or object motion, it makes sense to incorporate temporal (event) information for the deblurring task. How is texture information useful in this regard? Why should it be a stronger guidance than temporal information? 2. I understand why the authors could not compare with REFID during submission time. However, why did not the authors provide a qualitative comparison with REFID in the rebuttal PDF? 3. I am not convinced about the effectiveness of the proposed CAMMA module. According to the authors' rebuttal, a simple fusion of spike information with the input blurred image already gives a strong performance and the addition of the CAMMA module seems to outperform this naive baseline by only 0.1dB. This also casts doubt on the importance of using information bi-directionally as a simple unidirectional baseline seems to perform on par with a baseline that uses bi-directional information. --- Reply to Comment 1.1.1: Title: Thank you for your patient reply Comment: >1. The question of why low-resolution texture information (spike) should be stronger guidance compared to high temporal resolution (event) information as stated in Lines 52-53. 
(1) We apologize again for the overly absolute comparative term "stronger" in Line 53 and acknowledge that **both** temporal and texture information provide guidance for the deblurring task, while the unique sampling mechanism makes spike cameras contain richer texture information than event cameras. We will revise the statement to emphasize that "**Both** the temporal and texture information serve as guidance for the deblurring task." (2) It's worth noting that the texture information contained in spike streams is **derived from temporal information [1]**. Specifically (as described in Sec.3.4.2), due to the proportional relationship between spike firing rate and light intensity, at a given position and time $t$, we can infer the pixel value by identifying the nearest preceding and succeeding spikes and calculating their time interval (also mentioned in Lines 52-53). Thus, spike cameras' texture information results from their dense temporal sampling, offering **more** valuable deblurring clues. (3) In the toy experiment, from the aspect of colorization, the network maps the blurred color information in the input RGB image back to the correct positions in the input sharp grayscale image, which in fact corresponds to the deblurring process. So we consider the experiment as a deblurring task, which confirms that the extremely rich texture contained in the grayscale image can assist in deblurring. (Even if we regard it as a colorization task, due to the similarity between the latent texture information in the spike stream and grayscale images (Fig.S4), our network indirectly achieves deblurring by coloring the potential sharp texture features based on the RGB information of the blurred image. In this perspective, the texture information in spikes assists in deblurring.) To better clarify our argument, we conducted further experiments (Tab.R8) introducing both the blurred RGB image and the **LBP (Local binary patterns)** texture image of its ground-truth as inputs to NAFNet.
This scenario is **distinct from colorization**, and the results support the idea that diverse texture information can assist deblurring. That is to say, when the network possesses texture information from a latent sharp image corresponding to the blurry image, it can use it as clues to guide deblurring. The sharp edges in the texture may serve as anchors to guide deblurring, and notably, spikes inherently capture texture information. Thus it also makes sense to incorporate texture information for the deblurring task. We will further explore this in our future work. **Table.R8: More toy experiments using NAFNet.** | Input | PSNR ($e=65$) | SSIM ($e=65$) | | ---- | ---- | ---- | | Blurry RGB |29.06 |0.878 | | Blurry RGB + LBP of Ground Truth |35.90 |0.989 | | Blurry RGB + Gray Ground Truth |**39.84** |**0.999** | >2. Why didn't the authors provide a qualitative comparison with REFID in the rebuttal PDF? We've evaluated and added the qualitative comparison with REFID in the revised version. However, because the visual differences are minor and space is constrained in the rebuttal PDF, we regret not including them. We promise to add these visual results in the final version of the paper. >3. The effectiveness of the proposed CAMMA branch (1) The **CAMMA branch** we propose includes **two** key ideas: firstly, the introduction of information flow from the image domain to the spike domain, which leverages the rich texture characteristics of spike data, is distinct from event-based methods; secondly, the **CAMMA module** itself emphasizes sharp regions to further enhance performance. The combination of the two results in performance improvements from 0.2 to 0.4 dB across various input representations (Tab.3, Supp.Tab.S2).
We respectfully believe that evaluating these ideas separately (Tab.R7) and concluding that each part brings only a limited improvement would not be suitable, and we kindly suggest that assessing these ideas collectively, rather than judging the incremental gain of each component in isolation, is a more appropriate approach. The mask visualization in Fig.R3 further validates the CAMMA module's efficacy. Tab.S1 also shows a performance gain of 0.4 to 3.6 dB for the spike reconstruction branch with the CAMMA branch across diverse input representations. (2) We observed that experiments directly fusing the blurred input tend to be smoother, while those using the CAMMA module retain more details in the visual results. We will include the relevant visual results in the future revised version. We plan to explore more concise spike-assisted deblurring approaches in the future. [1] Zhu L, Dong S, Huang T, et al. A retina-inspired sampling method for visual texture reconstruction[C]//2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019: 1432-1437.
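The interval-based intensity inference described in point (2) of the rebuttal above — spike firing rate is proportional to light intensity, so a pixel value can be inferred from the interval between the nearest preceding and succeeding spikes around time $t$ — can be sketched as follows. This is a minimal illustration assuming a binary `(T, H, W)` spike array and a hypothetical scaling constant `c`, not the authors' actual implementation:

```python
import numpy as np

def tfi_reconstruct(spikes, t, c=255.0):
    """Texture-from-interval sketch: estimate the intensity at time index t
    from the interval between the nearest spikes surrounding t.

    spikes: (T, H, W) binary spike stream; c: hypothetical scaling constant
    (intensity is taken proportional to 1 / inter-spike interval).
    """
    T, H, W = spikes.shape
    img = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            s = np.flatnonzero(spikes[:, y, x])
            if len(s) < 2:
                continue  # too few spikes to measure an interval
            prev, nxt = s[s <= t], s[s > t]
            if len(prev) and len(nxt):
                interval = nxt[0] - prev[-1]   # nearest spikes around t
            else:
                interval = s[-1] - s[0]        # fall back to the full span
            img[y, x] = c / interval           # denser spikes => brighter
    return img
```

Pixels that fire densely yield short intervals and hence bright values, which is why a spike stream latently encodes the sharp grayscale texture the rebuttal refers to.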
Summary: This paper proposes a motion deblurring method that integrates RGB images and binary spike streams. In detail, it has a content-aware motion magnitude attention module and a transposed cross-modal attention fusion module. Experiments demonstrate state-of-the-art performance on deblurring datasets. Strengths: 1, the first spike-based motion deblurring model. 2, the two large-scale synthesized datasets for spike-based motion deblurring 3, state-of-the-art results. Weaknesses: 1, The Transposed Cross-Attention Fusion (TCAF) is close to the multi-Dconv head transposed attention (MDTA) in Restormer. 2, The TCAF module is similar to EICA in EFNet. 3, The spike camera is similar to the event camera, and the methods based on these two cameras are also similar except for the format of the event/spike stream. I suspect that event-based methods also work with the spike camera. Because the paper mainly focuses on the algorithm design, and the proposed SpkDeblurNet makes no huge difference compared with the existing EFNet, from this aspect the novelty of the paper is limited. 4, In the abstract, "preserving rich spatial details" and "are low-resolution image signals" are conflicting and confusing. 5, Typo in abstract. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See Weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your summary, and we appreciate you for pointing out the strengths of our paper. Our clarification and answers to the weaknesses you summarized are as follows. ***1. The Transposed Cross-Attention Fusion (TCAF) is close to the multi-Dconv head transposed attention (MDTA) in Restormer, and the TCAF module is similar to EICA in EFNet.*** Thanks for pointing it out. As described in Section 3.4.3, unlike the original usage of MDTA solely for self-attention computation, in TCAF, we have adapted and modified the MDTA module proposed in Restormer to address two specific challenges in our multi-modal fusion: 1) The lightweight nature of the spike branch with fewer channels compared to the deblurring branch makes conventional spatial attention computation not directly applicable. 2) The computational cost of traditional attention mechanisms grows quadratically with input size, which is impractical for image restoration tasks. Furthermore, as an effective module, the MDTA module has been drawn upon by multiple studies [1,2]. In contrast to EFNet's EICA module, our TCAF differs in that the two modalities within our module possess differing channel counts, with the spike branch having fewer channels. We also employ depth-wise convolutions to enhance local context before computing the attention map, aiming to improve generalization. ***2. The spike camera is similar to the event camera, and the methods based on these two cameras are also similar except for the format of the event/spike stream. I suspect that event-based methods also work with the spike camera.*** Please refer to Response 2 in **To all reviewers**. ***3.
In the abstract, "preserving rich spatial details" and "are low-resolution image signals" are conflicting and confusing.*** We would like to clarify that the phrase "preserving rich spatial details" refers to the spike cameras' ability, **in contrast to event cameras** that primarily capture temporal information, to capture per-pixel spatial information alongside temporal data. On the other hand, "are low-resolution image signals" reflects the current state of spike cameras, where the sensor array is limited to $250\times 400$ photosensitive units, resulting in low-resolution signals **compared to high-resolution RGB images**. We will revise the wording in the revised version to mitigate any potential misunderstandings. ***4. Typo in the abstract.*** Thank you for your careful observation. We have revised these typos in the paper. [1] Sun L, Sakaridis C, Liang J, et al. Event-based fusion for motion deblurring with cross-modal attention[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 412-428. [2] Song J, Mou C, Wang S, et al. Optimization-Inspired Cross-Attention Transformer for Compressive Sensing[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 6174-6184. [3] Sun L, Sakaridis C, Liang J, et al. Event-Based Frame Interpolation with Ad-hoc Deblurring[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 18043-18052. --- Rebuttal Comment 1.1: Title: Thanks for your valuable time, and we hope our responses are helpful for your re-assessment of our work Comment: Thank you for the thorough feedback and constructive suggestions. We would like to kindly inquire whether our previous response clarified your concerns and if there are any additional comments. We are glad to cooperate and provide answers to assist in the review process. Thank you very much for your time!
--- Reply to Comment 1.1.1: Title: Restatement of our responses that we hope can help with your re-assessment Comment: Dear Reviewer wVmq, Thank you sincerely for your detailed feedback. We notice that in your initial review, your concerns lie in three parts. 1. The first concern is the difference of our TCAF module from previous works, for which we have clarified the differences from existing modules and our motivation for the adaptation. 2. The second concern is whether event-based methods can work with the spike camera, for which we clarified the intrinsic distinctions between the cameras and methods, and we provided a comparison to show our approach's superiority with spike data. 3. The third concern is about the confusing phrase, for which we clarified the exact meaning we want to express and promise to revise the wording in the revised version. Based on these facts and the positive feedback from other reviewers, we sincerely ask whether our previous response has addressed your concerns, and we kindly hope you could reconsider your initial rating. If you still have any further comments or questions, please let us know and we are glad to address your further concerns. --- Rebuttal Comment 1.2: Comment: Thanks for your reply. My concerns are largely addressed, so I update my score to borderline accept. Although reply 1 is still arguable, this paper may be useful for the community. Please update the comparison and new results in the paper.
Summary: The paper proposes a novel approach that integrates the two modalities from two branches, leveraging spike streams as auxiliary visual cues for guiding deblurring in high-speed motion scenes, introducing a content-aware motion magnitude attention module and transposed cross-modal attention fusion module. Strengths: 1. Authors propose the first spike-based motion deblurring model equipped with content-aware motion magnitude attention and a cross-modal transposed attention fusion module. 2. Extensive experiments have demonstrated the effectiveness of the proposed model 3. Overall, the paper is well-written and technically sound. Weaknesses: 1. Although the authors validated the effectiveness of the proposed method on two synthetic datasets, the performance in real scenarios is still unknown. In particular, it is hard to simultaneously obtain aligned pairs of spike streams and RGB images. So, have the authors ever considered constructing a camera system to truly capture the spike data and blur images to better demonstrate the proposed setup is feasible and has practical value? 2. Considering the extra cost of spike cameras, is it really worth using such an expensive device for deblurring rather than using a high-speed global-shutter camera to directly capture relatively sharp images? Authors should provide more motivations for using spike cameras from the perspective of reality. 3. What about the generalization ability of the proposed method? Authors should present related experiments. Moreover, the computational complexity and inference time are also supposed to be compared. 4. Some minor issues: typos (e.g. line 15) Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses. I encourage the authors to actively and rigorously prepare the rebuttal and I will raise the rating if my concerns are well-addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and constructive feedback. We are encouraged that you find our method effective. We would like to address your concerns and answer your questions here. ***1. Have the authors ever considered constructing a camera system to truly capture the spike data and blur images to better demonstrate the proposed setup is feasible and has practical value?*** Thanks for your suggestion. Please refer to Response 1 in **To all reviewers**. ***2. Authors should provide more motivations for using spike cameras from the perspective of reality.*** We would like to clarify that the spike camera is built using the same CMOS sensors as traditional cameras, and it leverages consumer-grade integrated circuits through regular semiconductor manufacturing processes, making it a cost-effective solution [1]. In contrast, conventional high-speed cameras, such as Phantom cameras, necessitate specialized sensors and shutters that are considerably expensive. Specifically, the spike camera employs an innovative temporal-domain sampling method with continuous photon capture by photosensitive units, as opposed to synchronous exposure with a fixed exposure time. When the accumulated intensity surpasses a predefined threshold, a spike is generated. These spikes, generated by each photosensitive unit, are spatially organized to form a spike stream array. This sampling technique enables achieving exceptionally high sampling rates on conventional CMOS sensors. Considering the low cost of the spike camera and its advantages of high-speed and high-dynamic imaging and of recording rich spatial texture information, we believe that the spike camera holds promising prospects for various applications. ***3. What about the generalization ability of the proposed method? Moreover, the computational complexity and inference time are also supposed to be compared.*** We showcase visual results in Supplementary Fig.
S2 by applying our model trained on the Spk-X4K1000FPS dataset with a window size of 33 to real-world sequences. These results demonstrate the generalization ability of our method to real data. Additionally, we conduct experiments to test models trained on Spk-X4K1000FPS ($e=33$) on data with $e=65$, as well as cross-dataset testing by applying models trained on Spk-X4K1000FPS to the GoPro dataset. The visualized results are provided in the PDF file of the global response. All these results demonstrate the robust generalization of our method across diverse blurring conditions and scenes. We have added comparisons regarding computational complexity and inference time in Tab.R3 (note that the settings for e=33 and e=65 are only applicable to the Spk-X4K1000FPS dataset). Our approach achieves a favorable balance between complexity and performance, as evident from our results. **Table.R3: Comparisons between different methods regarding computational complexity and inference time.** | Method | MACs | Params | Inference Time | PSNR (on GoPro) | | ---- | ---- | ---- |---- |---- | | HINet |170.49G |88.67M |20.2ms |33.69| | NAFNet |63.06G|67.79M |27.8ms |33.69| | EFNet |107.93G |7.73M |14.9ms |35.46| | REFID |4.36T |88.81M |781.2ms |35.91| | Ours ($e=65$) |53.25G |12.93M |140.6ms |N/A| | Ours ($e=33$) |53.18G |12.92M |107.9ms |N/A| | Ours ($e=56$) |53.23G |12.93M |130.1ms |36.12| ***4. Some minor issues: typos (e.g. line 15)*** We appreciate your keen attention to detail. The mentioned typos have been rectified in the revised version of the paper. [1] Huang T, Zheng Y, Yu Z, et al. 1000× faster camera and machine vision with ordinary devices[J]. Engineering, 2022. --- Rebuttal Comment 1.1: Title: Thanks for your feedback; we are wondering whether we have addressed your concerns Comment: Thank you for the detailed feedback and constructive suggestions.
We sincerely hope our posted response can help to address your concerns about our paper and serve as a reference for your re-assessment of our work. If you have any further comments or questions, please let us know and we are glad to write a follow-up response. Thank you very much! --- Rebuttal Comment 1.2: Comment: Thanks for the response. After reading the authors' rebuttals and other reviewers' comments, I will hold my rating. --- Reply to Comment 1.2.1: Title: Thanks for your precious time; we would like to see if there are any further concerns or comments Comment: Dear Reviewer DRAW, Thank you sincerely for your insightful feedback. We notice that in your initial review, your concerns lie in three parts. 1. The first concern is about the **real-world camera system**, for which we have provided detailed information about the prototype of our hybrid camera system in Supplementary Sec.D.1 and Fig.S1. Deblurred results can be found in Fig.S2 and Fig.R4. 2. The second concern is the **motivations** for using spike cameras, for which we explain that the spike camera is built with consumer-grade CMOS sensors and integrated circuits through regular semiconductor manufacturing processes, making it a cost-effective solution for various applications. 3. The third concern is about the **generalization ability and the computational complexity**, for which we provided experiments showing the favorable generalization ability to real data, diverse blurring conditions and datasets, and we added comparisons about computational complexity. Given these facts and the positive feedback from other reviewers, we would like to kindly ask again if our previous response clarifies your concerns and if this could potentially serve as a basis for improving your initial rating. Also, if you have any further questions or comments, please let us know, and we are glad to give further responses and clarification.
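The integral sampling mechanism described in the rebuttal above (each photosensitive unit continuously accumulates incoming intensity and emits a spike once a predefined threshold is crossed) can be illustrated with a minimal simulation. The static per-step `intensity` input and the threshold `theta` are illustrative assumptions, not the camera's actual parameters:

```python
import numpy as np

def integrate_and_fire(intensity, steps, theta=1.0):
    """Sketch of a spike camera's integral sampling: every step, each pixel
    adds its incoming intensity to an accumulator and fires a spike,
    subtracting the threshold, once the accumulator reaches theta.

    intensity: (H, W) per-step light intensity (held static for simplicity).
    Returns a (steps, H, W) binary spike stream.
    """
    intensity = np.asarray(intensity, dtype=float)
    acc = np.zeros_like(intensity)
    stream = np.zeros((steps,) + intensity.shape, dtype=np.uint8)
    for t in range(steps):
        acc += intensity          # continuous photon accumulation
        fired = acc >= theta
        stream[t][fired] = 1      # spike where the threshold is crossed
        acc[fired] -= theta       # reset by theta, keeping the residual
    return stream
```

Brighter pixels cross the threshold more often, so the spike firing rate is proportional to light intensity — exactly the property the texture reconstruction discussed in the other rebuttal threads relies on, and the key contrast with the differential sampling of event cameras.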
Rebuttal 1: Rebuttal: We appreciate all reviewers for their helpful feedback. We are encouraged that they found our paper "well-written and technically sound" [DRAW], our method "novel" [Rugk] and "intuitive" [9w9w], and the proposed problem "new" [9w9w] and "a promising direction for further research". We are delighted that they acknowledge that our method outperforms state-of-the-art methods [wVmq,kDLr] and that the qualitative results are good [9w9w]. We are glad that they recognize our dataset [wVmq,9w9w,kDLr] and evaluate it as "a useful contribution" [9w9w] with "research value" [kDLr]. We will incorporate all feedback. In this general response, we would like to address some crucial concerns.

***1. More evaluation in real-world scenarios.***

> **[Reviewer DRAW]** Have authors ever considered constructing a camera system to truly capture the spike data and blur images to better demonstrate the proposed setup is feasible and has practical value?

Yes, actually we have provided relevant information about the prototype of our hybrid camera system in Supplementary Sec.D.1 and Fig.S1. We used this system to capture pairs of blurred images and spike sequences from real-world scenes and applied the proposed method to obtain deblurred results, as shown in Supplementary Fig.S2.

> **[Reviewer kDLr]** More thorough evaluation in real-world scenario is needed.

We achieve temporal synchronization by simultaneously triggering the capture programs of both cameras through a script. We use a feature point matching algorithm to achieve spatial registration. We are currently in the process of improving the hybrid camera system prototype and extending our evaluation to include a broader range of real-world scenarios. We have added additional results in real-world scenes in Fig.R4 in the PDF in the global response. ***2.
More comparative experiments applying event-based methods to the spike camera.***

> **[Reviewer wVmq]** The spike camera is similar to event camera, and the method based on these two methods are also similar except the format of event/spike stream. I suspect that event-based methods also works with spike camera.

> **[Reviewer 9w9w]** A more thorough and fair comparison on X4K1000FPS should be done with previous event-based methods.

While both camera types are neuromorphic cameras, event cameras utilize differential sampling, while spike cameras employ integral sampling. This fundamental difference leads to intrinsic distinctions. Both capture rich temporal information, yet spike cameras uniquely excel in capturing detailed spatial-temporal textures. This trait motivates our incorporation of image-domain priors into spike domains for explicit reconstruction. These refined spike features enhance the image branch for guiding deblurring. The mutual complementation of information between modalities enhances performance. In contrast, event camera-based methods often involve one-way information flow from the event branch to the image branch, highlighting a significant difference from our approach. Despite these disparities, both approaches fall under the multi-modal paradigm. Hence, we conducted supplemental experiments using the EFNet and REFID networks for spike-assisted deblurring. For EFNet, we employed input representations similar to its SCER approach, excluding the EMGC module due to its event-related nature. Regarding REFID, we utilized similar spike voxel representations. The results in Tab.R1 demonstrate our approach's superiority with spike data, yielding enhanced outcomes.
**Table.R1: Comparisons between different networks on Spk-X4K1000FPS dataset.**

| Method | PSNR ($e=33/e=65$) | SSIM ($e=33/e=65$) |
| ---- | ---- | ---- |
| EFNet | 36.36/33.53 | 0.960/0.937 |
| REFID | 36.30/33.47 | 0.962/0.945 |
| SpkDeblurNet (Ours) | **37.42/35.94** | **0.968/0.966** |

> **[Reviewer 9w9w]** The claim in L49-51 that, "Most event-based methods unidirectionally utilize information ... without achieving the complementarity of information" is not accurate.

Our phrase "achieving the complementarity of information from both domains" signifies bidirectional information flow between modal branches, enhancing each other's tasks. EFNet, in contrast, unidirectionally incorporates event data into the image branch for RGB deblurring, without reciprocating by including image data into the event branch. Our approach facilitates mutual bidirectional information exchange between branches, fostering mutual assistance. We will include a clearer elaboration in the revised version.

***3. [Reviewer 9w9w] What is the point of doing several experiments with different input representations?***

Apart from validating enhanced deblurring performance with diverse input representations via the spike branch, our motivation for exploring different input representations also involves evaluating the effectiveness of our CAMMA module. Supplementary Tab.S1 shows CAMMA's enhancement of spike branch reconstruction quality across various inputs. Tab.S2 shows CAMMA's contribution to improved overall deblurring performance. These experiments present CAMMA's potential to enhance performance across diverse inputs.

***4. [Reviewer 9w9w] The visualization of the motion magnitude mask in L208.***

We've added visualizations of the motion magnitude mask in our global response PDF. In Fig.R3, we display the hard thresholding mask and the content-aware mask for both spike stream and CSFI input representations.
The hard thresholding mask roughly masks high motion magnitude regions, while the content-aware mask offers refined masking. Notably, the CSFI representation yields a more pronounced content-aware mask due to its reduced information content compared to the spike stream, requiring greater reliance on image-domain prior knowledge. Supplementary Tab.S1 and Tab.S2 also demonstrate the significant improvement brought about by the introduction of CAMMA module when utilizing the CSFI representation. Pdf: /pdf/71844893b457b52d12d0b9b5c9c0a0126c8a87bb.pdf
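The PSNR values reported throughout this thread (Tab.R1, Tab.R2, Tab.R3) follow the standard definition $10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$; a minimal NumPy sketch for images scaled to $[0, 1]$, noting that the paper's exact evaluation code may differ in data range or averaging:

```python
# Peak signal-to-noise ratio in dB for images in [0, 1].
import numpy as np

def psnr(pred, target, data_range=1.0):
    """PSNR = 10 * log10(MAX^2 / MSE); infinite for identical images."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.5 everywhere on a $[0,1]$ image gives an MSE of 0.25 and hence $10\log_{10}(4) \approx 6.02$ dB.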
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a novel approach that combines traditional cameras and spike cameras to address motion blur in high-speed scenes. By leveraging spike streams as auxiliary visual cues, the proposed spike-based motion deblurring model effectively extracts relevant information from blurry images using content-aware motion magnitude attention and transposed cross-modal attention fusion modules. Strengths: The paper proposes a novel technique to deblur a blurry scene using a traditional camera frame along with spike camera frames. The proposed method is able to leverage the high SNR information from the traditional camera frame and the motion information from the spike camera frames to reconstruct the final image with high quality. Weaknesses: The authors have missed a major branch of related works using Quanta Image Sensors (QIS). QIS are a very similar family of image sensors that operate at high speed and produce single-bit frames, and there are many previous works on QIS-based deblurring and denoising. For example:

[1] Ma, S., Gupta, S., Ulku, A.C., Bruschini, C., Charbon, E. and Gupta, M., 2020. Quanta burst photography. ACM Transactions on Graphics (TOG), 39(4), pp.79-1.

[2] Chi, Y., Gnanasambandam, A., Koltun, V. and Chan, S.H., 2020. Dynamic low-light imaging with quanta image sensors. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16 (pp. 122-138). Springer International Publishing.

[3] Chandramouli, P., Burri, S., Bruschini, C., Charbon, E. and Kolb, A., 2019, May. A bit too much? High speed imaging from sparse photon counts. In 2019 IEEE International Conference on Computational Photography (ICCP) (pp. 1-9). IEEE.

In fact, "[4] Liu, Y., Gutierrez-Barragan, F., Ingle, A., Gupta, M. and Velten, A., 2022. Single-photon camera guided extreme dynamic range imaging. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1575-1585).
" uses a very similar idea of combining two imaging modalities, but for a different task - HDR imaging. EDIT: With the promised change, my major concern will be addressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The related works and the comparison section should ideally have a detailed comparison with the QIS based methods. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***1. The authors have missed a major branch of related works using Quanta Image Sensors (QIS). The related works and the comparison section should ideally have a detailed comparison with the QIS-based methods.***

Thanks for recommending these works. We appreciate the recognition of Quanta Image Sensors' (QIS) ability to capture high-speed motion and perform well across various tasks. In the revised paper, we have thoroughly reviewed and incorporated the following references [1-4] into our introduction, related work, and comparative experiments. We will incorporate the discussion on these works in our final version, where the one additional page allows us to extend the current sections with more content and illustrations. Furthermore, we would like to gently emphasize that while both the spike camera and QIS are capable of capturing high-speed scenes, there exist significant differences between the two technologies. QIS primarily relies on single-photon avalanche diode (SPAD) detector technology, whereas the spike camera is constructed from CMOS sensors similar to traditional cameras and utilizes regular semiconductor manufacturing processes, resulting in a more cost-effective solution. Exploiting the fact that the time sensitivity of CMOS photosensitive devices widely used today has reached tens of nanoseconds, the spike camera achieves exceptional temporal resolution by mimicking the sampling mechanism of the primate fovea. Additionally, we would like to clarify that the works of [1-3] focus on reconstruction tasks specific to QIS and involve single-modal data only. Moreover, while both our method and the QIS-assisted HDR imaging method [4] incorporate two modalities, this does not imply a similarity between our work and [4]. Firstly, multi-sensor modality fusion is a common task in the field of autonomous driving. Secondly, [4] simply concatenates the features of two modalities during skip connections for fusion.
In contrast, our method takes a bidirectional complementary approach to leverage information from both modalities. We introduce the initial deblurred output from the image branch as a high-resolution image-domain prior to guide the spike branch in spike reconstruction, and then we introduce the refined spike features into the image branch to guide deblurring in the image domain. Additionally, we propose the TCAF module for the effective fusion of the two branches. Since our method and [4] address two different tasks and cannot be directly compared, to validate the effectiveness of our proposed framework, we adapted the network of [4] for our spike-assisted deblurring task under their experimental protocol. Experimental results in Tab.R2 indicate that our framework achieves superior performance over [4] and can better utilize the complementarity between different modalities to improve performance.

**Table.R2: Comparisons between different networks on Spk-X4K1000FPS dataset.**

| Method | PSNR ($e=33/e=65$) | SSIM ($e=33/e=65$) |
| ---- | ---- | ---- |
| CMOS-SPC [4] | 33.65/31.59 | 0.926/0.912 |
| SpkDeblurNet (Ours) | **37.42/35.94** | **0.968/0.966** |

[1] Ma, S., Gupta, S., Ulku, A.C., Bruschini, C., Charbon, E. and Gupta, M., 2020. Quanta burst photography. ACM Transactions on Graphics (TOG), 39(4), pp.79-1.

[2] Chi, Y., Gnanasambandam, A., Koltun, V. and Chan, S.H., 2020. Dynamic low-light imaging with quanta image sensors. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16 (pp. 122-138). Springer International Publishing.

[3] Chandramouli, P., Burri, S., Bruschini, C., Charbon, E. and Kolb, A., 2019, May. A bit too much? High speed imaging from sparse photon counts. In 2019 IEEE International Conference on Computational Photography (ICCP) (pp. 1-9). IEEE.

[4] Liu, Y., Gutierrez-Barragan, F., Ingle, A., Gupta, M. and Velten, A., 2022. Single-photon camera guided extreme dynamic range imaging.
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1575-1585). --- Rebuttal Comment 1.1: Comment: Thank you for your reply. With the promised change, my major concern will be addressed. --- Reply to Comment 1.1.1: Title: Thanks for your valuable time Comment: Thank you very much for the thorough review and for increasing the score. Your time and input mean a lot to us.
null
null
null
null
null
null
Explainable Brain Age Prediction using coVariance Neural Networks
Accept (poster)
Summary: This paper proposes a new framework for predicting brain age, an area of increasing interest in computational neuroscience. The authors use coVariance Neural Networks (VNNs) to develop an anatomically interpretable method that relies on cortical thickness features. Their framework goes beyond existing metrics for the brain age gap in Alzheimer's disease (AD), revealing that VNNs can attribute anatomical interpretability to the brain age gap by identifying significant brain regions. It also demonstrates that this interpretability hinges on the VNNs' ability to leverage specific eigenvectors of the anatomical covariance matrix, offering an explainable approach to brain age prediction. Strengths: 1. The paper addresses an important problem in Alzheimer's disease research by focusing on predicting the brain age gap. This is a critical aspect to understand and model, as it can potentially indicate an accelerated aging process due to adverse health conditions. 2. The proposed framework based on coVariance Neural Networks (VNNs) is a notable strength of the paper. Not only does it present a novel approach to brain age prediction, but it also offers improved interpretability, which is often lacking in complex neural network models. 3. The use of cortical thickness features for brain age prediction is well justified in the context of neuroscience. This choice is reasonable and lends biological plausibility to the models, likely improving their effectiveness and relevance to the task. Weaknesses: 1. The learning rate selection for the Adam optimizer at 0.059 appears excessively specific and could be a sign of overfitting the hyperparameters to the data. The authors should provide an analysis or justification for this choice or consider testing a range of learning rates to demonstrate the robustness of the model to this parameter. 2. The paper's reliance on a single dataset with limited sample size is a notable weakness. 
Such a setting may limit the generalizability of the model and its results. The authors could improve this aspect by testing the model on additional datasets or considering methods to augment or diversify the existing data. 3. The lack of comparison with simpler or classic baseline models, such as linear regression, is another shortcoming. Such comparisons are necessary to demonstrate the model's superior performance and justify the additional complexity of the proposed approach. Without these comparisons, it is difficult to assess the true contribution of the paper's proposed method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Can you provide an analysis or justification for this choice of learning rate, or consider testing a range of learning rates to demonstrate the robustness of the model to this parameter? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the novelty and key strengths of our work. We address the concerns raised in the review below. **Additional empirical evidence.** To address the concern about empirical evidence, we have added results on the ADNI1 dataset (see pdf file attached with the global response). These results were derived from the models trained on the OASIS-3 dataset and are highly consistent with the results reported in the paper. **Choice of learning rate.** The learning rate and other aspects of the VNN architecture were chosen according to a hyperparameter search procedure on the training set using the package Optuna. Meaningful comparisons with studies in the existing literature fall into the following two categories. **Comparison with traditional statistical models.** Since the chronological age prediction task using cortical thickness is a multivariate regression problem with correlated input features, a PCA-based regression model is the most appropriate traditional model for comparison with VNNs. However, this method obfuscates the anatomical interpretability of individual anatomic features, as principal components could be linked with a combination of anatomical regions without any further insight. Importantly, PCA-based regression can be prone to instability due to small perturbations in the principal components and, hence, non-reproducible in settings with similar but perturbed principal components. Unlike the PCA-based regression model, *VNNs offer theoretical stability guarantees on their performance that have been demonstrated empirically (see [a] and Appendix K in supplementary file)*. Other relevant approaches such as elastic-net regression require excessive fine-tuning of regularization parameters and could be overfit on the dataset characteristics (e.g., the data processing pipeline used to extract cortical thickness features).
Our results have demonstrated robustness to such aspects (for instance, VNNs trained on XYZ dataset processed according to ANTsCT pipeline and of dimensionality 100 could extract similar patterns of interpretability on OASIS-3 dataset that was processed according to Freesurfer; see Fig. 13 in the supplementary material). Furthermore, traditional statistical models operate within the dimensionality of a given dataset. In contrast, *VNNs are scale-free* and can process a dataset of arbitrary dimensionality. Hence, it is feasible for us to cross-validate the inference over datasets that may have different number of features. This is a relevant feature in neuroimaging and was utilized by us to demonstrate that anatomic interpretability results on OASIS-3 (148-dimensional) could be derived using VNN models that had been trained on XYZ dataset (100-dimensional). See Fig. 13 in supplementary file for details. We have further used this property of VNNs to cross-validate anatomical interpretability observed in OASIS-3 on an independent ADNI1 dataset (see **Fig. S2** in the attached pdf file with global response), where we observed consistent anatomical patterns for interpretability across datasets of dimensions 100, 200, and 400. **Comparison with deep learning methods with post-hoc explainability.** Lack of explainability is a well-recognized drawback of deep learning models. To address this, limited studies have utilized state-of-the-art post-hoc, model-agnostic methods such as SHAP or LIME [b] and saliency maps [c] to add explainability to their brain age estimation approaches, identifying the input features most relevant to the inference outcome. However, explainability offered by such post-hoc approaches may be unstable to small perturbations to the input, inconsistent to variations in training algorithms and model multiplicity (i.e., when multiple models with similar performance may exist but offer distinct explanations), and computationally expensive [d,e,f]. 
In this context, VNNs provide a transparent learning model that is inherently interpretable and can associate elevated brain age with brain regions characteristic of a disease or health condition as well as the principal components of the covariance matrix, with no significant added computational cost. **We will add the above discussion regarding comparison with existing methods in the literature to the literature review.** [a] Sihag, et. al, coVariance neural networks. In Proc. Conference on Neural Information Processing Systems, Nov. 2022. [b] A. Lombardi, et. al, “Explainable deep learning for personalized age prediction with brain morphology,” Frontiers in neuroscience, vol. 15, p. 578, 2021. [c] C. Yin, et. al, “Anatomically interpretable deep learning of brain age captures domain-specific cognitive impairment,” Proc. the National Academy of Sciences, vol. 120, no. 2, p.e2214634120, 2023 [d] A. K. Dombrowski, et. al, “Explanations can be manipulated and geometry is to blame,” Adv. in Neural Inf.Proc. systems, vol. 32, 2019. [e] J. Adebayo, et. al, “Sanity checks for saliency maps,” Adv. in neural Inf.Proc. systems, vol. 31, 2018. [f] E. Black, et. al, “Model multiplicity: Opportunities, concerns, and solutions,” in Proc. the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 850–863 --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I think a table/figure to show the test performance change when varying the learning rate helps understand the method. --- Reply to Comment 1.1.1: Title: Varying learning rate Comment: We thank the reviewer for their suggestion. We provide the following table to address reviewer's concern. Specifically, we investigated the results of chronological age prediction for the test set in HC group and brain age prediction in AD+ group in OASIS-3 dataset when the VNNs were trained with varying learning rates in the range [0.03, 0.2]. 
Similar to the results reported in the paper, we trained 100 VNN models for different permutations of the training and validation set and leveraged them to report the mean and standard deviation of the results for test performance as well as $\Delta$-Age for AD+ group. The results for learning rate 0.059 are also included to facilitate comparison.

| Learning rate | Test set (MAE) | Test set (correlation) | $\Delta$-Age (AD+ group) |
| ---- | ---- | ---- | ---- |
| 0.03 | 5.93 $\pm$ 0.103 | 0.53 $\pm$ 0.011 | 3.33 $\pm$ 4.22 |
| 0.045 | 5.96 $\pm$ 0.19 | 0.53 $\pm$ 0.019 | 3.37 $\pm$ 4.23 |
| **0.059** | **5.82 $\pm$ 0.13** | **0.51 $\pm$ 0.078** | **3.54 $\pm$ 4.49** |
| 0.075 | 5.98 $\pm$ 0.24 | 0.54 $\pm$ 0.018 | 3.62 $\pm$ 4.43 |
| 0.10 | 6.04 $\pm$ 0.302 | 0.53 $\pm$ 0.062 | 3.58 $\pm$ 4.31 |
| 0.20 | 6.26 $\pm$ 0.407 | 0.54 $\pm$ 0.08 | 3.72 $\pm$ 4.49 |

The results above demonstrate that the test performance diminishes slightly in terms of MAE when the learning rate is smaller than 0.059. With learning rates 0.1 or 0.2, we observed limited but relatively more significant degradation in MAE (coupled with increased variance across 100 VNN models). However, for all scenarios, the mean of the Pearson's correlation between the VNN outputs and ground truth was consistently above 0.5. Also, for all scenarios, the $\Delta$-Age estimated by VNNs for AD+ group was consistently elevated, as has been reported in the paper. Hence, **the above results demonstrate that the findings reported in the paper were robust to choice of learning rate within a reasonable range**. We are happy to address any further concerns the reviewer may have.
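The mean $\pm$ standard deviation entries in the learning-rate table above aggregate one MAE per trained model; a minimal sketch of that aggregation, using synthetic per-model MAEs that are illustrative placeholders rather than the paper's results:

```python
# Aggregate per-model MAEs for one learning-rate setting into mean +/- std,
# as reported per row of the table. The MAE values here are synthetic.
import numpy as np

def summarize(maes):
    """Mean and standard deviation of per-model MAEs."""
    maes = np.asarray(maes, dtype=np.float64)
    return maes.mean(), maes.std()

# hypothetical MAEs from 100 independently trained models (not real results)
rng = np.random.default_rng(0)
per_model_mae = 5.9 + 0.1 * rng.standard_normal(100)
mean_mae, std_mae = summarize(per_model_mae)
```

Repeating this per learning rate, as in the table, gives one (mean, std) pair per row.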
Summary: The authors propose a framework for predicting brain age by developing coVariance Neural Networks (VNN). They leverage the stability properties of VNN to first train the network using data from healthy controls to predict chronological age. Then, they perform inference using a combined dataset that includes groups with Alzheimer's disease (AD+), using the covariance matrix of the data. On the regional residuals, they conduct statistical analysis to identify distinct brain regions affected by the disease and demonstrate a strong correlation between the residuals and specific eigenvectors, suggesting that the framework enables anatomical interpretability. By utilizing a simple linear model to correct for age bias, the authors obtain the brain age after predicting the chronological age. Furthermore, the authors provide additional discussions on eigenvector(s) and their implications. Strengths: - This paper is well-organized and effectively motivated, providing a clear and comprehensive explanation. The suitability of VNN for the target task is convincingly demonstrated. - The framework presented in this paper takes advantage of the covariance structure, which is further supported by eigenvector studies. This approach allows for a reduction in the number of learnable parameters, making it particularly suitable for the medical domain where sample sizes are often limited. Weaknesses: - This paper lacks original contribution, as it primarily builds upon existing studies with some additional adaptations and explanations for the specific task. - The rationale for training 100 VNNs is not explained properly. It is unclear whether this large number is necessary to overcome any deficiencies or solely for the benefits of ensemble learning. Furthermore, the inferior performance compared to other approaches when using the additional ensemble method should be properly addressed. 
- Relatively large residuals in specific regions may indicate the impact of a particular disease, and this is perhaps the reason why the authors used ANOVA for detailed diagnosis. However, while higher correlation between residuals and specific eigenvectors may demonstrate validity, it does not necessarily imply interpretability. - Discussions on scalar parameters (filter taps) would be beneficial as they provide how much to utilize specific-sized neighborhoods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Does replacing CHC with CHA result in significantly improved outcomes? - Is it still feasible to do so even when there are a large number of input variables? Is there a minimum F value that corresponds to the number of variables? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Authors adequately discuss their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the appropriateness of VNNs to the brain age prediction task and broadly to data analysis in medical domain. The concerns in the review are addressed below. ### Contributions and novelty. The motivation to study VNNs for brain age prediction relative to other studies in brain age prediction hinges on the following aspect. **Fallacies of focusing on performance.** Most existing brain age prediction studies use the performance on the chronological age prediction as a prominent metric to assess the quality of brain age. However, the performance on chronological age prediction is a flawed metric for assessing the quality of brain age estimate. Particularly, in the absence of explainability, the performance solely cannot provide clarity on the following aspects: - Does better performance on predicting chronological age correlate with a more *useful* estimate of brain age? - Are all models that achieve a specific mean prediction error (say 1 year) on the chronological age prediction task *equivalent* in terms of their ability to predict a meaningful brain age in adverse health conditions? A recent study of several existing brain age prediction frameworks has revealed that the accuracy achieved on the chronological age prediction task may not correlate with their clinical utility (see [a]). Further, the age-bias correction step in brain age evaluation procedure accounts for any inaccuracy in the chronological age prediction (whether the Pearson’s correlation between estimated chronological age and ground truth is 0.9 or 0.5). *These observations suggest that explainability must be the key metric to assess the biological plausibility of a brain age algorithm, irrespective of its performance on the chronological age prediction task.* The interpretability offered by VNNs facilitates a convincing evaluation of the biological plausibility of brain age estimates and has not been explored before. 
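The age-bias correction step referenced above is commonly implemented as a least-squares linear fit of predicted age on chronological age over healthy controls, with the fitted residual added back to the prediction; a minimal sketch under that assumption (variable names, synthetic data, and the exact correction form are illustrative, not the authors' code):

```python
# Linear age-bias correction: fit pred ~ a * age + b on healthy controls (HC),
# then corrected = pred + (age - (a * age + b)); Delta-Age = corrected - age.
import numpy as np

def fit_bias(age_hc, pred_hc):
    """Least-squares fit of predicted age vs. chronological age on the HC set."""
    a, b = np.polyfit(age_hc, pred_hc, 1)   # slope, intercept
    return a, b

def corrected_brain_age(age, pred, a, b):
    """Bias-corrected brain age estimate."""
    return pred + (age - (a * age + b))

# synthetic HC set whose predictions are exactly linear in age
age_hc = np.linspace(45.0, 90.0, 50)
pred_hc = 0.5 * age_hc + 30.0
a, b = fit_bias(age_hc, pred_hc)
delta_age_hc = corrected_brain_age(age_hc, pred_hc, a, b) - age_hc
```

By construction, $\Delta$-Age is centered at zero on the healthy controls used for the fit, so an elevated $\Delta$-Age in a clinical group (e.g., AD+) is measured relative to healthy aging.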
**Conceptual contributions.** Furthermore, *VNNs provide methodological clarity to all aspects of brain age prediction*. Specifically, VNNs learning 'something significant' from the chronological age data as regression models is a necessary first step for attaining robust anatomical interpretability (Appendix H.2 in supplementary file provides more evidence in this context). This interpretability hinges on exploitation of specific eigenvectors of the anatomical covariance matrix. The VNNs learn to exploit these eigenvectors when trained on the task of chronological age prediction. Further, the age-bias correction step is limited to projecting the VNN output onto a space where brain age can be compared meaningfully to chronological age. Due to word limit here, we also refer the reviewer to comparison with existing approaches in the response to Reviewer mFGa. **[a]** Jirsaraie, et. al, A systematic review of multimodal brain age studies: Uncovering a divergence between model accuracy and utility. Patterns, 2023. **We will incorporate the above discussion to better communicate the contributions of VNNs to brain age prediction task.** ### Training on 100 VNN models. We report the performance derived from *100 VNN models to demonstrate the high confidence in our findings*, i.e., our findings were not a product of only one model. Existing literature shows that it is possible to have different explainability for deep learning models with similar performance. Such an observation can reduce the confidence in explainability offered by only one model. In this context, our findings in Fig. 2a were highly consistent across 100 VNN models, thus, suggesting that the results were not overfit on a specific training set. ### Eigenvectors and interpretability. First, we refer the reviewer to Fig. 9 in the supplementary file, which plots three eigenvectors with the largest associations with regional residuals on a brain surface. The comparison of anatomical interpretability in Fig. 
2a and eigenvector plots in Fig. 9 suggests that the VNN’s ability to exploit these eigenvectors was instrumental to recovering the results in Fig. 2a. This observation is indeed verified by our results in Fig. 13 in supplementary file where randomly initialized VNNs are unable to provide robust patterns of anatomic interpretability. Hence, the ability of VNNs to exploit these eigenvectors is instrumental to the observed anatomical interpretability. ### Filter taps in VNNs. Since the first layer consisted of 5 filter taps and the second layer consisted of 10 filter taps, the overall neighborhood size is 13 (as the first filter tap in each layer is not tied to the convolution operation). ### Number of input variables and anatomical interpretability. The number of input variables can vary widely across neuroimaging datasets. Currently, we assess the significance using p-values in ANOVA obtained after Bonferroni correction which provides consistent results for datasets with 148, 100, 200, and 400 number of features. However, datasets with a larger number of features also provide more localized results as compared to datasets with smaller number of features. A minimum F-value corresponding to number of features may provide a more accurate mechanism to assess the significance of our results. We will discuss this aspect in the limitations of our analysis and potential future work. ### Choice of covariance matrix. There was no statistically significant difference in brain age gaps derived for the AD+ group for the two choices of covariance matrices (results for ${\bf C}_H$ are provided in Appendix L). In terms of interpretability, we observed reduced significance for certain regions when ${\bf C}_H$ was used. 
Since there are no current ground truths or benchmarks to evaluate interpretability, studying the associations of brain age estimates with various clinical and genetic markers of dementia is warranted to better understand the biological plausibility of the two choices. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their clarification. I believe that this paper provides good novelty from a cognitive science perspective, but I am still quite concerned about whether the findings given in this paper are robust. It is quite unfortunate that no baseline was provided; it is therefore still hard to convince myself of what the gain is. The authors strongly argue that all previous approaches focus on chronological age prediction, which is a false metric, and thus it would have been nice if new findings from this work over the previous approaches had been presented side by side. I agree with many of the points that Rev m6iX has raised, and I would like to keep my score for now. --- Reply to Comment 1.1.1: Title: Revised response to Reviewer n5r3 Comment: *Note: After further discussion among the authors, we have expanded our response to Reviewer n5r3. The previous response is included, and the revisions appear under the **Edit** section.* We thank the reviewer for their acknowledgement of the key arguments in our response and appreciate their feedback. Our claims of robustness rely on our observation that 100 distinctly trained VNN models consistently isolated certain brain regions as contributors to elevated brain age gap on multiple datasets of distinct dimensionalities. We also clarify that our arguments pertain to the insufficiency of the performance on chronological age prediction as a metric for assessing the quality of a brain age estimate, while not discounting the relevance of chronological age in this application.
More specifically, the model must learn the information about healthy aging from chronological age data, but the performance achieved on this task is an incomplete metric to assess whether the model is able to provide a biologically plausible brain age estimate in neurodegeneration. **Edit.** We understand the reviewer's concern regarding the robustness of our results, which we understand to center on whether our results are spurious -- or "robust," as the reviewer states. Our experiments were very much focused on tackling this aspect convincingly and hence, we believe that we have more common ground with the reviewer than it seems. Where we depart from the reviewer is in the use of baselines to establish this robustness. As much as we would like to provide these comparisons, we can't think of any comparison that would be fair and meaningful. Rather, we attempt to corroborate robustness by **interpretability** and **generalization**. Indeed, the connection between VNNs and the eigenvectors of the anatomical covariance matrix allows us to identify brain regions that are responsible for elevated brain age. These regions turn out to have clinical significance. And we have further demonstrated that a VNN that we train on one specific dataset generalizes to different resolutions and different datasets. *This is a strong indication that the features we are extracting are not spurious.* This is perhaps not **the** way in which the reviewer would like us to show that our results are not spurious but is certainly **a** way of showing so. At the very least, we think that we can all agree that, in the words of Reviewer m6iX, *"there is a message here that will be of interest to the field."*
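As an aside on the filter-tap arithmetic discussed earlier in this thread (5 taps in the first layer, 10 in the second, yielding a neighborhood size of 13): since the first tap of each layer is an identity term with no covariance multiplication, a layer with $K$ taps widens the neighborhood by $K-1$ hops, and stacked layers compose additively. A minimal sketch of this bookkeeping (the function name is ours, not from the paper):

```python
# Sketch of the receptive-field arithmetic for a stacked coVariance/graph
# filter bank. Assumption (per the rebuttal): the first (k = 0) tap of each
# layer applies no covariance multiplication, so a layer with K taps widens
# the neighborhood by K - 1 hops, and layers compose additively.

def effective_neighborhood(taps_per_layer):
    """Total hop distance reached by composing the layers."""
    return sum(k - 1 for k in taps_per_layer)

# Two-layer VNN from the rebuttal: 5 taps then 10 taps -> 4 + 9 = 13 hops.
print(effective_neighborhood([5, 10]))  # → 13
```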
Summary: The authors' aim is to investigate the application of coVariance Neural Networks (VNNs) to brain age prediction. They train and test VNNs on the OASIS-3 brain dataset with additional Alzheimer's disease and cortical thickness data. The results indicate an association between the biomarker (brain age prediction) and a more established biomarker (cortical thickness). Strengths: In this paper, the authors approach an interesting problem with many practical applications: interpretable brain age predictions. The paper is well written and organised, making it accessible to a broad audience. The introduction effectively describes the problem and highlights its significance, while the literature review provides a broad view of the field and related fields. The methodology section is detailed, providing a clear explanation of the techniques used, and gives an in-depth description of the previously published approaches used, which is helpful for the reader. It might have benefitted the paper to include a more thorough discussion of the rationale behind the chosen methods. The discussion provides a good summary of the research findings and their implications. (However, these sections could be expanded to include a more comprehensive discussion of the potential applications and limitations of the study, as well as directions for future research). Weaknesses: Alongside the strengths discussed above, there are a number of weaknesses that must also be acknowledged or addressed. Notably: - A significant portion of the paper is devoted to introducing and discussing VNNs, which have already been published - There is a lack of references to and discussion of previously published methods for interpreting brain age predictions, [1, 2, 3] to name a few examples. How does this method differ from previous approaches? Does this method corroborate or pick up new regions of interest? The second reference [2] appears in the list of references in the paper but is not discussed anywhere.
- Lack of results from the contributing brain regions. I would have liked to see a table or figure with all brain regions and their associations. The VNN section can be reduced to create space for this. - The applications might not be directly obvious to those outside the field or on its periphery. A conclusion paragraph discussing the impact of this work and future steps would help place it in the current state of the field. [1] Hofmann, S. M., Beyer, F., Lapuschkin, S., Goltermann, O., Loeffler, M., Müller, K. R., ... & Witte, A. V. (2022). Towards the interpretability of deep learning models for multi-modal neuroimaging: Finding structural changes of the ageing brain. NeuroImage, 261, 119504. [2] Lee, J., Burkett, B. J., Min, H. K., Senjem, M. L., Lundt, E. S., Botha, H., ... & Jones, D. T. (2022). Deep learning-based brain age prediction in normal aging and dementia. Nature Aging, 2(5), 412-424. [3] Kolbeinsson, A., Filippi, S., Panagakis, Y., Matthews, P. M., Elliott, P., Dehghan, A., & Tzoulaki, I. (2020). Accelerated MRI-predicted brain ageing and its associations with cardiometabolic and brain disorders. Scientific Reports, 10(1), 19940. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How is the alignment of images done? Could bias be introduced here? - Is the age-correction in 3.3 performed on the train or test group? It sounds like the parameters are obtained from the test group, which can lead to overfitting - In line 42-43 appears the sentence: "Thus, there is a lack of conceptual clarity in the role of training to predict chronological age of healthy controls in predicting a meaningful ∆-age [19].". How does this statement reflect the work being performed here? Surely there are points of discussion worth addressing. - Why train cortical thickness only on the healthy group? Particularly in light of the previous question. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations discussion is more a discussion of the general field of brain age prediction than of the specific limitations of this approach. It does not demonstrate the authors’ awareness of the limitations of the analysis. There is no discussion of, or consideration given to, ethical concerns. Although this work has many benevolent applications, any system that can predict a person’s age, or interpret the prediction in an anatomic way, can be abused in potentially unethical or dubious ways; using the output to refuse refugee applications or to increase insurance premiums are two examples. Although I don't think a separate ethics review is needed, I encourage the authors to show awareness of possible misuse of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Novelty, conceptual contributions, and comparisons with existing methods. Here, we clarify the lack of conceptual clarity associated with chronological age in brain age prediction, the motivation for using VNNs, and the comparisons with existing brain age approaches. These aspects are intertwined and, hence, responded to jointly. **Limitations of 'performance'.** Many existing brain age algorithms focus primarily on performance in the chronological age prediction task. However, improved performance on the chronological age prediction task cannot ensure improved biological plausibility of the brain age estimate (see [a] below). Hence, it is imperative to have explainability of the brain age estimate as the key metric for assessing any algorithm in this domain. **Choice of VNNs for brain age prediction.** To start with, the interpretability offered by VNNs facilitates a convincing evaluation of the biological plausibility of brain age estimates. Furthermore, *VNNs provide methodological clarity to all aspects of brain age prediction*. Specifically, a VNN learning 'something significant' from the chronological age data as a regression model is a necessary first step for attaining robust anatomical interpretability. This interpretability hinges on the exploitation of specific eigenvectors of the anatomical covariance matrix. The VNNs learn to exploit these eigenvectors when trained on the task of chronological age prediction. Further, the age-bias correction step is limited to projecting the VNN output onto a space where brain age can be compared meaningfully to chronological age. **Comparison with existing approaches.** The studies suggested by the reviewer broadly fit into the category of deep learning methods with post-hoc explainability and will be added to the literature review.
We note that such approaches do not rigorously account for the limitations of post-hoc explainability, some of which include: (i) instability to small perturbations of the input, (ii) inconsistent results for different variations of training algorithms, and (iii) model multiplicity (i.e., when multiple models with similar performance may exist but offer distinct explanations). In contrast, VNNs provide a transparent learning model with no added computational cost for explainability. **Interpretability offered by VNNs.** Our analysis identified the following regions as prominent contributors to elevated brain age gap in AD: entorhinal, superior temporal, temporal pole, and subcallosal. Among these, the entorhinal region is implicated in the earlier stages of AD according to the Braak staging criteria, while the others are implicated in later stages. For instance, the study in [b] implicates regions in the temporal lobe among the prominent contributors to brain age in MCI subjects. **[a]** Jirsaraie, et al., A systematic review of multimodal brain age studies: Uncovering a divergence between model accuracy and utility. Patterns, 4(4), 2023. **[b]** Ran, Chen, et al. "Brain age vector: A measure of brain aging with enhanced neurodegenerative disorder specificity." Human Brain Mapping, 2022. ### Training on healthy group. A key feature of AD is the manifestation of biological characteristics that signify accelerated aging relative to healthy aging. Hence, the models were trained to learn the characteristics of healthy aging and deployed on the AD cohort to detect accelerated aging. ### Age-correction and additional empirical evidence. Indeed, the models were trained on the HC group, although the AD+ group was unseen.
To address the concern regarding overfitting on the HC group and further provide evidence of the generalization of our results, we leveraged the models trained on OASIS-3 to predict brain age in the ADNI1 dataset (demographics and results included in the pdf file attached with the global response). We observed a larger brain age gap in dementia, with similar brain regions implicated as in the results on the OASIS-3 dataset. ### Impact and future work. An immediate future direction is to explore the associations of brain age gap and regional residuals with clinical and biological markers of AD. The utility of other cortical features (such as volume or area) in brain age prediction can also be explored. Furthermore, the VNN-based explainability framework is also a potentially impactful data analysis approach. Specifically, we have demonstrated the connection between the inference outcome and the eigenvectors of the underlying covariance matrix while validating the key properties of stability and reproducibility of findings. ### Limitations of analysis. Our analysis is limited to older individuals, and a dataset with more diverse age groups is expected to provide more holistic information for brain age. Isolation of brain regions contributing more to brain age in AD than HC hinges on a binary group comparison. Such a comparison can be impacted by the composition of the dataset (for instance, a skewed dataset may not provide informative results). We will discuss this aspect in the limitations section. ### Results from the contributing brain regions. Fig. 11 in the supplementary material provides a subset of this data for one model. We will compile this data from all 100 VNN models in the supplementary file. ### Description of VNNs. As per the suggestion, we will optimize this section further by moving some content to the supplementary file. ### Alignment of images.
The OASIS-3 dataset was processed via the FreeSurfer pipeline, and we use the derived cortical thickness data available online in the data repository. The relevant processing details are provided in the data dictionary document for this dataset (available online). A pertinent detail therein is that all FreeSurfer outputs were quality checked for errors in segmentation before upload. Hence, we do not expect alignment to introduce bias in our results. ### Societal concerns. Thank you for raising the relevant societal concerns. We will add these to our discussion. --- Rebuttal Comment 1.1: Title: Reply to authors Comment: I thank the authors for taking the time to reply to my review and for taking it into consideration. I am conflicted about the work. The authors have answered many of my questions, but I still would have liked to see more connections to existing work on a discussion level. Explainability can be enigmatic, and a quantitative comparison with other methods might not be meaningful. There are also minor things that were left unaddressed, such as works listed in the references not being cited anywhere in the paper (I suspect this might be because the authors copied the references over from the supplementary, but I would suggest keeping them separate for clarity). Having said that, there is a message here that will be of interest to the field, and the additions of related work and improved discussion on limitations improve its value. Therefore, I am prepared to improve my rating from 4 to 5. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their feedback and are encouraged by their appreciation of the message of our work. Certain references were indeed a part of the Relevant Literature section in the supplementary material. We will integrate the reviewer's suggestions on the inclusion of more comprehensive discussions relative to existing works and better communication of the limitations.
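To make concrete the age-bias correction debated in this thread (the reviewer's question on train vs. test fitting, and the rebuttal's answer that parameters come from the HC training group): one common formulation in the brain age literature, and one plausible reading of the rebuttal rather than the authors' released code, fits a linear model of predicted age on chronological age in the healthy training group and treats the residual as the corrected brain age gap:

```python
import numpy as np

def fit_age_bias(pred_train, age_train):
    # Fit pred ≈ a * age + b on the healthy training group only, so that
    # test-set statistics never leak into the correction parameters.
    a, b = np.polyfit(age_train, pred_train, 1)
    return a, b

def brain_age_gap(pred, age, a, b):
    # Corrected gap: residual of the prediction after removing the
    # age trend estimated on the training group.
    return pred - (a * age + b)

# Synthetic illustration (values are hypothetical, not from the paper).
rng = np.random.default_rng(0)
age_hc = rng.uniform(55, 90, 400)
pred_hc = 0.6 * age_hc + 25 + rng.normal(0, 2, 400)  # regression-to-mean bias
a, b = fit_age_bias(pred_hc, age_hc)
gap_hc = brain_age_gap(pred_hc, age_hc, a, b)
# By construction, the mean corrected gap on the training HC group is ~0;
# an unseen AD+ group would then be scored with the same (a, b).
```

The design point at issue: because (a, b) are estimated on the training HC group and then frozen, applying the correction to the held-out AD+ group does not constitute fitting on test data.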
Summary: The paper leverages coVariance neural networks (VNNs) for brain age gap prediction in a principled statistical fashion. The paper focuses on the specific case of training on a healthy control and evaluating the gap on people with Alzheimer's disease. ------------------------------------ EDIT AFTER REBUTTAL PERIOD: I will increase the score I gave to this paper from 6 to 7 (accept), and increase the Soundness and Contribution scores from 3 to 4. I'm more confident about the relevance of this work compared to when I first reviewed it, and I hope the remaining reviewers can engage in this discussion too. We still seem to disagree on the need for performance-based baselines. Strengths: The paper is well written and organised. In contrast to other relevant methods in the field that need post-hoc approaches for explainability, this paper is able to offer a transparent statistical approach given the regional expressivity of the VNN architecture. This is a key and very relevant strength of the paper, given the gap in the literature and the need for interpretable methods if one really wants to have useful machine learning applications in healthcare practice. It is particularly good that the paper demonstrates that the age gap differences between AD+ and HC groups were not driven by age or sex differences. Weaknesses: 1. I think the paper lacks a comparison of their method with other baselines. I understand that the contribution is about the principled statistical method, but without a comparison with previous methods it is difficult to understand how good the method is at least for the dataset analysed. It is highlighted in section 4.1 that other DL methods achieve better MAEs; how much is this difference in the case analysed in this paper? If the difference is too high, the interpretability advantages of a method are lost in practice, because if the method is way worse in predicting age gap, it is of lesser importance that we can interpret the results. 
I am aware, as the paper defends, that a very accurate method might not exactly be the best, but this information is important for a proper contribution analysis with the tools that we have available. 2. The fact that this paper uses only one main dataset to evaluate the method is a key weakness. As I further ask in the Questions section, why haven't the authors used another dataset, like the Human Connectome Project (HCP), for a wider evaluation of the relevance of this work? 3. The Pearson's correlation achieved (as stated in section 4.1) is very close to 0.5, which raises the question of how much the model actually learned. Furthermore, the MAE of above 5 seems to be in the quantile range of figure 2b, thus being in the range of where most of the values are actually predicted anyway. 4. Small typo in Discussion section: "near-prefect". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. I find it a bit confusing to evaluate the contribution of this paper given that VNNs were previously presented in the literature, and therefore I get the impression that the actual contribution is more about the added explainability analysis; however, most of section 4 is about the results of the VNNs, not necessarily taking into account the interpretable part. Can the authors more clearly explain the differences between this work and previous literature? 2. In section 2.2 the paper states that a particularly useful characteristic of VNNs is that they can process a dataset of an arbitrary dimensionality. Wouldn't it be useful then to show the method applied to different datasets with a different number of features? Or, even just the same dataset in which the number of features is different? 3. I question the choice of healthy control dataset to train the VNN. The OASIS-3 dataset has a mean age of 68 years old, which might raise the question of how "healthy" this control actually is.
Why didn't the authors use another dataset with a healthier cohort, like the Human Connectome Project (HCP)? 4. Is there any particular reason for the authors to focus only on cortical thickness for age prediction? Have the authors tried including other measures (e.g., volume/area) to see whether the model performance increases? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations of this work are stated in the paper at different places, including at the end. No potential negative societal impact of this work seems to have been discussed. I'm thinking that a wrong prediction of a brain age gap (given implications for neurodegenerative diseases) could potentially imply over-medication in the case where the brain age gap predicted is wider than reality, as just one example. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Contribution and novelty. To understand the contributions relative to other studies in brain age prediction, the following aspect must be recognized explicitly. **Fallacies of focusing on performance.** The performance on chronological age prediction is a flawed metric for assessing the quality of a brain age estimate, as it cannot provide clarity on the following aspects: - Does better performance on predicting chronological age correlate with a more *useful* estimate of brain age? - Are all models that achieve a specific mean prediction error (say 1 year) on the chronological age prediction task *equivalent* in terms of their ability to predict a meaningful brain age in adverse health conditions? A recent study of several existing brain age prediction frameworks has revealed that the accuracy achieved on the chronological age prediction task may not correlate with their clinical utility (see [a]). Further, the age-bias correction step in the brain age evaluation procedure accounts for any inaccuracy in the chronological age prediction (whether the Pearson’s correlation between the estimated chronological age and the ground truth is 0.9 or 0.5). (Here, we remark that we do not advocate for an arbitrarily suboptimal training of a given model on the chronological age prediction task.) These observations raise the question: what is the appropriate metric to assess the validity of brain age prediction algorithms? **Conceptual contributions.** To start with, the interpretability offered by VNNs facilitates a convincing evaluation of the biological plausibility of brain age estimates. Furthermore, *VNNs provide methodological clarity to all aspects of brain age prediction*. Specifically, a VNN learning 'something significant' from the chronological age data as a regression model is a necessary first step for attaining robust anatomical interpretability (Appendix H.2 in supplementary file).
This interpretability hinges on the exploitation of specific eigenvectors of the anatomical covariance matrix. Notably, the VNNs learn to exploit these eigenvectors when trained on the task of chronological age prediction (Fig. 13 in supplementary file). Further, the age-bias correction step is limited to projecting the VNN output onto a space where brain age can be compared meaningfully to chronological age. **We will incorporate the above discussion and replace Fig. 3 in the main paper with Fig. 9 from the supplementary material to better communicate the explainability feature of VNNs.** **Comparison with existing approaches.** A meaningful comparison with existing approaches is provided as follows. 1. *Comparison with deep learning methods with post-hoc explainability.* A limited number of studies have utilized state-of-the-art post-hoc, model-agnostic methods such as SHAP or LIME and saliency maps to add explainability to their brain age estimation approaches. However, the explainability offered by such approaches may be unstable to small perturbations in the data, inconsistent across variations in training algorithms, subject to model multiplicity (i.e., when multiple models with similar performance may exist but offer distinct explanations), and computationally expensive [b,c,d]. In this context, interpretability is an inherent feature of VNNs, which comes with no significant computational cost. 2. *Comparison with traditional statistical models.* A PCA-based regression model is a standard method and one of the most appropriate traditional models for comparison. However, this method obfuscates the anatomical interpretability of individual anatomic features. Importantly, it can be prone to instability due to small perturbations in the principal components and, hence, non-reproducible. Unlike this statistical model, VNNs offer theoretical stability guarantees on their performance that have been demonstrated empirically (see [e] and Appendix K). 3.
*Baseline performance by VNNs with perceptron as readout.* We can, of course, artificially improve the VNN performance on the chronological age prediction task further by the use of an adaptive readout layer (see Appendix J in the supplementary file, where an error of 4.17 years was achieved by VNNs with a perceptron as a readout layer). However, this modification inhibits the anatomical interpretability and the scale-free property of VNNs. [a] Jirsaraie, et al., A systematic review of multimodal brain age studies: Uncovering a divergence between model accuracy and utility. Patterns, 4(4), 2023. [b] A.-K. Dombrowski, et al., “Explanations can be manipulated and geometry is to blame,” NeurIPS, 2019. [c] J. Adebayo, et al., “Sanity checks for saliency maps,” NeurIPS, 2018. [d] E. Black, et al., “Model multiplicity: Opportunities, concerns, and solutions,” ACM Conf. on Fairness, Accountability, and Transparency, 2022. [e] Sihag, et al., coVariance neural networks. NeurIPS, 2022. ### Additional empirical evidence. We have cross-validated the findings on another dataset (ADNI1; see global response). ### Choice of dataset. Datasets in AD studies typically focus on older healthy controls who have been clinically screened. In both OASIS-3 and ADNI, controls were age-matched with the respective AD cohorts. In principle, a dataset that represents the adult lifespan may provide more holistic information about healthy aging. Other choices of features, such as cortical area and volume, can certainly provide more insights into brain age, and VNNs are indeed applicable to them. We will mention these aspects as immediate future directions of our work. ### Scale-free property of VNNs. The suggestion to show the VNNs applied to datasets with different numbers of features is highly relevant. In **Fig.
S2** in the pdf file attached with the global response, we demonstrate the transferability of VNNs, where we report consistent interpretability patterns for brain age on datasets of different dimensionalities. ### Limitations. Thanks for this suggestion. We will add it in the Limitations section. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed answers to my review, and for the global answer to all reviewers. The experiments on the extra dataset are very good for illustrating the relevance of this paper. I agree with the highlighted advantages of the paper that the authors wrote in their rebuttal, but I believe some of my points were still not sufficiently tackled. With regards to the weaknesses that I defined, I still think that the lack of comparison with baselines is a weakness of this work. I think I understand that current metrics of performance for brain age gap prediction might not be the best, but it's still a common practice in the literature. The fact that the authors are only able to cite one previous paper to support this decision illustrates my point: yes, ref [a] shows that we might need to rethink how we do research in this area, but I still believe that information on currently available metrics is important for a proper analysis with the tools that we have available. If there's only one previous work highlighting this issue, I don't think the correct approach is to blindly follow it. I argue that the correct approach for a high-impact paper should be to show experiments and connect with this previous work. In the wider context of the literature, I argue that showing these comparisons will give a wider picture of how your method compares with previous ones and how they relate to ref [a]'s concerns. In this sense, the fact that the Pearson's correlation achieved (as stated in section 4.1) is very close to 0.5 still raises the question of how much the model actually learned.
I understand that this might be a suboptimal metric, but isn't it a bad indication that it's so close to 0.5? Being a suboptimal metric doesn't mean it doesn't provide some information for us to understand the model. It also seems to me that the authors did not answer my questions 1 and 4. For question 1, I'm a bit confused about the contributions of this paper given that VNNs were introduced in previous work, so I'd like to ask for a clarification of the contributions of this work compared to the previous ones. For question 4, I understand that the authors defend that their method can be easily extended, but I was trying to look for the reasoning the authors had for choosing only cortical thickness. I'm sorry if I missed this in your answers, but these questions still seem unanswered to me. I look forward to the discussion with the other reviewers, as I think they raised important points, and I would like to know whether they think the authors sufficiently tackled their concerns. --- Reply to Comment 1.1.1: Title: Follow-up to Reviewer mFGa's comments Comment: We thank the reviewer for their valuable feedback on our rebuttal. Our discussion below first addresses Q.1 of the reviewer, where we provide further clarifications on how our work is positioned relative to existing works in the brain age prediction domain, while also addressing their comments regarding the relevance of performance. We hope that the reviewer will agree that an elevated brain age for an individual with neurodegeneration, by itself, does not add much clinical utility beyond validating what a clinician can already observe using a variety of other biomarkers (e.g., NfL). The focus on ‘chronological age prediction performance’ is indeed a common practice in the domain of machine learning (ML) driven brain age prediction, and many existing algorithms already show elevated brain age gaps for a variety of phenotypes.
Existing literature provides sufficient evidence of many sophisticated deep learning models, with thousands to millions of learnable parameters, that can readily achieve a very high accuracy on the regression task of predicting chronological age. In this context, *we argue there is no significant methodological innovation to be made in demonstrating accurate predictions for chronological age$^1$*. **To put it simply, a key focus of this paper is not on the accuracy in predicting chronological age, but rather ‘what properties does a VNN gain when it is exposed to the information provided by the chronological age of healthy controls’ and ‘whether these properties lead to an informative brain age estimate in AD’.** While highly relevant, most existing studies fail to tackle these aspects convincingly or even consider them. Our experiments have demonstrated that the information gleaned by VNNs from the chronological age of healthy controls$^2$ is sufficient to estimate a brain age in AD that is biologically plausible (as it isolates the brain regions characteristic of AD as contributors to elevated brain age gap). *Hence, we provide a methodologically holistic perspective on the brain age prediction task, for which we do not find any appropriate benchmarks to compare to in the existing literature.* It could be the case that improving the performance on chronological age prediction beyond that achieved by VNNs indeed improves this biological plausibility in some form. However, there has not been appropriate focus on this aspect in the literature and, to this end, we provide a potential benchmark on how to investigate this from a methodological perspective for future work in this domain. The reviewer rightly points out the sparsity of literature that argues for a decoupling of brain age from performance on chronological age prediction. Besides the study [a], a previous study (see Bashyam et al.
published in Brain, 2020) also showed evidence of moderately fitted models on chronological age achieving a more informative brain age on a large dataset. However, a 'moderate' fit is hard to define concretely when the model can achieve a much improved performance on the task it is being trained for. *The skewed focus on performance on the chronological age prediction task in the brain age domain is a by-product of the lack of ‘conclusively’ explainable models$^3$*. We hope that our work will help bring appropriate attention to the explainability of brain age prediction models, as it is paramount for the practical utility of ML-derived brain age estimates. **We hope that the discussion above answers Q.1 raised by the reviewer, as we have argued that the major contributions of our work are both conceptual and methodological: we not only bring up various technical and conceptual shortcomings of brain age approaches that inhibit their practical utility, but also provide a potential solution in the form of VNNs.** We are happy to answer any further concerns in this regard. **Reason for focusing on cortical thickness.** Among the measures of thickness, volume, and area, cortical volume data is the most likely to be biased by the head size of individuals in the dataset, while cortical thickness is the least. Also, by limiting our analysis to only one modality (thickness in this case), the scope of our analysis was limited to 148 features. From a statistical perspective, this choice kept the ratio of available data samples to features as high as possible while still capturing information across the cortex. $^1$ As observed in Appendix J, increasing the number of learnable parameters by about 10 times improves the chronological age prediction performance. However, these modifications take away our ability to comment on the anatomical interpretability of VNNs and hence do not add anything insightful to the task of brain age prediction.
$^2$ Figure 7 in the supplementary material adds to the discussion in Section 4.1 in this regard. $^3$ We have argued previously that post-hoc explainability is not conclusive or robust if shown to exist only on one instance.
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their insightful feedback and appreciation of our work. We have provided individual responses to all reviewers. In this global response, we highlight the major advantages offered by VNNs over existing approaches and summarize our response to two common concerns raised by the reviewers. *Also, the attached pdf file provides the results for cross-validation of brain age results on another public dataset on Alzheimer's disease (AD).* To begin with, we highlight the following two *strengths of VNNs as a data analysis tool*. - **Inherent explainability.** Most existing deep learning models offer explainability via post-hoc, model-agnostic approaches. Such approaches are vulnerable to inconsistent results due to a range of factors and need extensive computation. In contrast, explainability is an *inherent feature* of VNN models and is straightforward to infer. - **Scale-free characteristic of VNNs for cross-validation.** Because VNNs are scale-free and can process a dataset of arbitrary dimensionality, it is feasible to cross-validate the inference over datasets that may have a different number of features without any re-training or modifications to the VNN. This is a relevant feature for data analysis in neuroimaging, where datasets describing the same phenomenon can have different dimensionalities. In the context of brain age, we leverage the scale-free property of VNNs to cross-validate the anatomical interpretability associated with elevated brain age gap in the ADNI1 dataset (see **Fig. S2** in the attached pdf file), where we observed spatially consistent patterns associated with elevated brain age gap in AD across datasets of dimensions 100, 200, and 400 using VNNs that had been trained only on XYZ dataset (dimension: 100; see supplementary file). Next, we emphasize the following *contribution of VNNs to the application of brain age prediction*.
- **Methodological clarity to brain age estimation.** Our results demonstrate that VNNs provide methodological clarity to all major aspects of brain age prediction. Specifically, VNNs learning 'something significant' from the chronological age data as regression models is a necessary first step for attaining robust anatomical interpretability (corroborated by additional results in Fig. 13 in the supplementary material). Furthermore, the VNNs learn to exploit the eigenvectors of the anatomical covariance matrix in a certain manner when trained on the task of chronological age prediction. Importantly, the anatomical interpretability offered by VNNs for the brain age gap hinges on the exploitation of specific eigenvectors of the anatomical covariance matrix. Thus, an elevated brain age gap in a population with an adverse health condition can be linked to specific brain regions and the eigenvectors of the anatomical covariance matrix. Further, the analysis of regional residuals revealed that the utility of the age-bias correction step is limited to projecting the VNN output onto a space where a clinician can observe brain age relative to chronological age. There were two common **concerns raised by the reviewers**. Our response to these concerns can be summarized as follows. - **Lack of empirical evidence on additional data.** To address this, we have provided results on a dataset of 47 controls, 71 individuals with MCI, and 33 individuals with dementia from the standardized 3.0 T ADNI1 dataset (see [*] for details) from the wider ADNI database in the attached pdf file. We chose this standardized dataset in the spirit of reproducibility and to avoid selection bias. *Cross-validation using models trained on OASIS-3.* The results in **Fig. S1** were obtained from models that were trained on the OASIS-3 dataset. The observations on ADNI1 were highly consistent with those made in the paper.
Specifically, in the ADNI1 dataset, the brain age gap was significantly higher for individuals with dementia as compared to healthy controls, with MCI in between. Also, the subcallosal, entorhinal, temporal pole, and superior temporal regions were cross-validated on this cohort as contributors to elevated brain age gap in the combined cohort of dementia and MCI (equivalent to the AD+ group from OASIS-3). - **Novelty, conceptual contributions, and comparisons with existing methods.** As highlighted above, (i) VNNs provide significant methodological clarity to major components of brain age prediction, and (ii) the explainability offered by VNNs is more robust and stable than that offered by existing approaches. We have responded to this concern, in part, in individual responses to all reviewers. The most extensive response has been provided in the comment titled ‘Contributions and novelty’ in response to the reviewer mFGa. [*] Wyman BT et al., Standardization of analysis sets for reporting results from ADNI MRI data. Alzheimers Dement. 2013. If our response has addressed your concerns, we would greatly appreciate it if you could re-evaluate your rating of the paper accordingly. We are happy to address any further concerns that may arise during the reviewer-author discussion period. **Acknowledgement.** Data used in the preparation of the results for the ADNI1 dataset were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). Pdf: /pdf/650e6071650cc82c61979c9776922056fb67eeba.pdf
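The scale-free property invoked in the global response rests on VNN filters being polynomials in the covariance matrix, whose coefficients do not depend on the data dimension. A minimal NumPy sketch of this idea — not the authors' implementation; the filter taps `h` are hypothetical:

```python
import numpy as np

def covariance_filter(x, C, h):
    """Apply a polynomial coVariance filter z = sum_k h[k] * C^k @ x.
    The coefficients h are independent of the data dimension, which is
    what lets the same trained filter run on 100-, 200-, or 400-dim data."""
    z = np.zeros_like(x)
    Ck_x = x.copy()           # holds C^k @ x, starting at k = 0
    for hk in h:
        z = z + hk * Ck_x
        Ck_x = C @ Ck_x
    return z

h = [0.5, 0.3, 0.1]           # hypothetical filter taps (learned in a real VNN)
for dim in (100, 200, 400):   # datasets of different dimensionality
    X = np.random.default_rng(0).normal(size=(50, dim))
    C = np.cov(X, rowvar=False)          # stand-in for an anatomical covariance matrix
    z = covariance_filter(X[0], C, h)
    assert z.shape == (dim,)             # same filter, any dimension
```

The same three coefficients are applied unchanged at every dimensionality, which is the mechanism behind cross-validating a model trained on one parcellation against datasets of dimensions 100, 200, and 400.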
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Estimating the Rate-Distortion Function by Wasserstein Gradient Descent
Accept (poster)
Summary: This paper describes the elegant connection between entropic OT and estimation of rate-distortion functions, and proposes a novel algorithm based on Wasserstein Gradient Descent (WGD). It outlines prior methods (e.g. Blahut-Arimoto, NERD) in the same (or similar) language, performs WGD on the two objective functions of interest, detailing the gradient calculations, and provides numerical experiments comparing the method to prior works in the literature. In these experiments, we see that WGD is able to provide good bounds on the rate-distortion function. Strengths: This paper wrote down a very elegant connection between rate distortion + entropic optimal transport -- as someone who studies the latter, I think this is great. I think it's surprising this connection has not been made explicit between the two communities. Their proposed method is also quite insightful. Numerical and statistical results are convincing as well. Weaknesses: There is minimal technical novelty in this paper, though I think that's completely fine given the newfound connections the authors make bridging two academic communities. I think the references are a bit thin, especially on the level of the Wasserstein gradient for $\mathcal{L}_{EOT}$. A recent paper, for example, is [Rigollet2022Sample], as well as some of the papers it cites, and the papers that cite it. @article{rigollet2022sample, title={On the sample complexity of entropic optimal transport}, author={Rigollet, Philippe and Stromme, Austin J}, journal={arXiv preprint arXiv:2206.13472}, year={2022} } It is also worth pointing out that the constant "$C$" in (15) of the text scales exponentially poorly with the regularization parameter $\epsilon$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Would it be interesting to analyze the "debiased" entropic OT dynamics for this problem as in e.g. [Pooladian2022Debiaser] and [Feydy2019Interpolating]?
@inproceedings{pooladian2022debiaser, title={Debiaser beware: Pitfalls of centering regularized transport maps}, author={Pooladian, Aram-Alexandre and Cuturi, Marco and Niles-Weed, Jonathan}, booktitle={International Conference on Machine Learning}, pages={17830--17847}, year={2022}, organization={PMLR} } @inproceedings{feydy2019interpolating, title={Interpolating between optimal transport and mmd using sinkhorn divergences}, author={Feydy, Jean and S{\'e}journ{\'e}, Thibault and Vialard, Fran{\c{c}}ois-Xavier and Amari, Shun-ichi and Trouv{\'e}, Alain and Peyr{\'e}, Gabriel}, booktitle={The 22nd International Conference on Artificial Intelligence and Statistics}, pages={2681--2690}, year={2019}, organization={PMLR} } Minor comments: L33: “R-D” function instead of $R(D)$ (L86, L99, L101, L110, …) [just to be consistent] L221: “optimize” Missing \mathbb{E} in the statement of Proposition 4.3? You also could write this as one line, saving a lot of space (since the rates are basically the same for all the statements). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
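For readers outside either community, the correspondence the review praises can be sketched as follows. These are the standard formulations; the paper's exact normalization of the multiplier $\varepsilon$ may differ:

```latex
% Rate-distortion function of a source X ~ mu with distortion rho:
R(D) \;=\; \inf_{Q_{Y|X}\,:\;\mathbb{E}[\rho(X,Y)] \le D} I(X;Y).

% Entropic OT with BOTH marginals fixed:
\mathcal{L}_{EOT}(\mu,\nu) \;=\; \min_{\pi \in \Pi(\mu,\nu)}
  \int \rho \, d\pi \;+\; \varepsilon\, \mathrm{KL}(\pi \,\|\, \mu \otimes \nu).

% Relaxing the second marginal constraint and minimizing over nu recovers the
% R-D Lagrangian, since inf_nu KL(pi || mu (x) nu) = I(X;Y), and the inner
% minimization collapses by the Gibbs variational identity:
\inf_{\pi\,:\,\pi_1 = \mu}
  \Big\{ \int \rho \, d\pi + \varepsilon\, \mathrm{KL}(\pi \,\|\, \mu \otimes \nu) \Big\}
\;=\; \mathbb{E}_{X\sim\mu}\Big[ -\varepsilon \log \mathbb{E}_{Y\sim\nu}\,
  e^{-\rho(X,Y)/\varepsilon} \Big] \;=:\; \mathcal{L}_{BA}(\nu).
```

In words: R-D estimation is EOT with the reproduction marginal $\nu$ left free and optimized over, which is exactly what parameterizing $\nu$ by particles and descending makes tractable.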
Rebuttal 1: Rebuttal: Thank you for the detailed and insightful comments. We have corrected all the typographical issues as you suggested, and address all your remaining concerns and questions below. > I think the references are a bit thin, especially on the level of the Wasserstein gradient for $\mathcal{L}\_{EOT}$. Thank you for pointing us to the more recent references, which we have included in our revision. Particularly, we added more recent papers like [Chizat 2022] and [Yan et al. 2023] in relation to gradient flows for $\mathcal{L}\_{EOT}$ and $\mathcal{L}\_{BA}$ . We further added references to [Genevay et al, 2019] and [Rigollet and Stromme, 2022] in relation to the sample complexity result. The result of [Rigollet and Stromme, 2022] is particularly interesting because it could be used to derive a version of our Proposition 4.3 with distortion functions other than the quadratic. > It is also worth pointing out that this constant "C" in (15) of the text scales exponentially poorly with the regularization parameter This is true. However, unlike the case where one uses EOT as an approximation of OT, in the compression problem we are not only interested in the small regularization regime. The setting of low rate and high distortion (corresponding to large entropic regularization) has received increasing interest from the generative modeling community [Mentzer et al., 2020; Yang et al., 2023], where techniques such as GANs and diffusion models allow compression algorithms to achieve realistic image reconstructions with extremely low bit-rates, at the cost of a large squared-error distortion. In that sense, the poor scaling of C is less problematic for R-D estimation than for other applications of EOT. > Questions: > Would it be interesting to analyze the "debiased" entropic OT? We focus on the “biased” entropic OT distance since it has the exact mathematical form of the lossy compression / R-D problem. 
The “debiased” entropic OT distances may also admit information-theoretic interpretations and would be interesting to investigate in future work. > Missing \mathbb{E} in the statement of Proposition 4.3? The first line of Proposition 4.3 does not have an expectation sign because there the LHS is already deterministic. The following lines involve the $m$-sample empirical measure $\mu^m$, which is random, and we take expectation over the samples. ---- **References** Lénaïc Chizat. Mean-field langevin dynamics: Exponential convergence and annealing. arXiv:2202.01009, 2022 Yuling Yan, Kaizheng Wang, and Philippe Rigollet. Learning gaussian mixtures using the wasserstein-fisher-rao gradient flow. arXiv:2301.01766, 2023 Aude Genevay, Lénaic Chizat, Francis Bach, Marco Cuturi, and Gabriel Peyré. Sample complexity of Sinkhorn divergences. AISTATS, 2019 Philippe Rigollet and Austin J Stromme. On the sample complexity of entropic optimal transport. arXiv:2206.13472, 2022 Mentzer, F., Toderici, G. D., Tschannen, M., and Agustsson, E. High-fidelity generative image compression. Advances in Neural Information Processing Systems, 33:11913–11924, 2020. Yang, Y., Mandt, S., and Theis, L. An introduction to neural data compression. arXiv:2202.06533, 2022
Summary: The authors propose a novel method based on Wasserstein Gradient Descent for numerically computing the rate-distortion function. Judging from the experiments, their method seems to achieve performance comparable to other state-of-the-art methods while having lower computational cost and being conceptually simpler, which reduces the need for tuning (of neural network architectures). Strengths: - The method is novel and well motivated - Good choice of numerical experiments Weaknesses: Lack of theoretical analysis/guarantees (see also questions) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some questions on the theory side: - In (15) in line 202, should there be an inequality between the first and second line? - Should the integral in Lemma 4.2 not be w.r.t. $\nu^t$ and not $\nu^k$? - I am not quite sure I correctly understand the implications of Proposition 4.3. How can anything be implied about WGD if we do not know that WGD converges to a global optimizer, i.e. how does WGD relate to the $\min$ considered in the bound of Proposition 4.3? To me right now it looks like a theoretical heuristic. Small Typos: - Line 172: baised should be biased - Line 221: optimie should be optimize - Line 302: Should "articles" be particles? - Line 389: One of the authors is shown as Le?nai?c (not sure if this might be caused on my end) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It would be interesting to have more theoretical guarantees for this method and/or more experiments on other realistic data.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful review. We have incorporated all the typographical/notational suggestions as well as added a new experiment on MNIST (see PDF attached to the main rebuttal). Below we go over all the concerns and how we addressed them in our updated manuscript. > In (15) in line 202, should there be an inequality between the first and second line? There should have been a comma; fixed. > Should the integral in Lemma 4.2 not be w.r.t \nu^t and not \nu^k ? Yes; fixed. > I am not quite sure I correctly understand the implications of Proposition 4.3. How can anything be implied about WGD if we do not know that WGD converges to a global optimizer i.e. how does WGD relate to \min considered in the bound of Proposition 4.3 ? To me right now it looks like a theoretical heuristic. Proposition 4.3 is a statement about the approximation capabilities of the extremum estimator $\min\_{\nu \in \mathcal{P}\_n} \mathcal{L}(\nu)$ rather than the actual algorithm for computing it. We have now made this aspect clearer in the updated manuscript. In statistics, it is common to consider the theoretical merit (e.g., consistency, bias) of an estimator, such as the MLE, separately from the computational procedure for obtaining it; similarly, machine learning often studies the approximation ability of a hypothesis class (such as neural networks) separately from the training problem (e.g., the numerics of SGD). Our Proposition 4.3 is a statement of the former kind. It assures the consistency of the R-D estimator $\min_{\nu \in \mathcal{P}_n} \mathcal{L}(\nu)$ (defined by "the best R-D loss achievable over all reproduction measures supported on at most $n$ points") and further quantifies its rate of convergence in terms of: 1). the number of source samples ($m$), characterizing its statistical efficiency; and 2). the number of particles used ($n$), which implies a statement of universal approximation analogous to that of neural networks. 
While the *asymptotic* consistency of such an R-D estimator has been shown in information theory literature [Harrison and Kontoyiannis, 2008], we are not aware of any previous *finite-sample* bounds like ours. ---- **References** Matthew T. Harrison and Ioannis Kontoyiannis. Estimation of the rate–distortion function. IEEE Transactions on Information Theory, 54(8):3757–3762, 2008. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their clear response to my questions which have addressed my concerns. In accordance I have slightly increased the evaluation.
Summary: The paper proposes an algorithm for estimating the rate-distortion function for (possibly) continuous sources. This is done by presenting the R-D problem as an entropy-regularized optimal transport problem, and solving it via Wasserstein GD. The source distribution is approximated by empirical distributions. This yields an upper bound on the true R-D function. Some recent results from EOT are adapted in order to bound the theoretical deviation from the real R-D for sub-Gaussian sources. The convergence of the method is demonstrated through simulations. Strengths: The proposed method seems novel, and the theoretical contribution about sample complexity (Prop. 4.3) seems important. Weaknesses: My concerns here are twofold: 1. First, the proposed Algorithm outputs the marginal distribution of the reproduction. This allows one to compute an upper bound for the R-D function (as described in Sec. 4.4), but only where the Blahut-Arimoto step (11) is tractable, which is a major drawback. It might also incur an additional error (beyond the result of Prop. 4.3), which is not discussed. 2. Second, and more crucially, regarding the clarity of the text itself: writing is *very* unclear and not well-organized. Just to name a few issues: The definition for the Wasserstein gradient is not clear, and it is unclear how to compute it. $\psi^t$ is used sometimes for a gradient, and sometimes for a potential (whose calculation is also not explained). Eq. (15) is missing some relation sign (probably $\leq$). In Lemma 4.2, $k$ is not defined (a typo, maybe?). Many cases of "it is well known" or "we know" without any reference. Many more typos. I'm afraid this does not meet the quality requirements for NeurIPS. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: The paper contains a decent theoretical contribution, but the writing needs a total makeover. I would recommend acceptance only if this issue could be fixed for the final version, which I doubt.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: The Algorithm outputs only the marginal distribution of the reproduction and provides an upper bound for the R-D function only where the Blahut-Arimoto step is tractable. Neither this, nor other limitations, are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful suggestions, which have significantly improved our manuscript. Below we go over your concerns and detail how we have addressed them in our updated manuscript. > First, the proposed Algorithm outputs the marginal distribution of the reproduction. This allows to compute an upper bound for the R-D function (as described in Sec. 4.4), but only where the Blahut-Arimoto step (11) is tractable, which is a major drawback. It might also incur in an additional error (beyond the result of Prop. 4.3), which is not discussed. Our original writing appears to have been misleading; we have rephrased it to emphasize that the upper bound computation in our method is in fact tractable, and there is no additional error incurred. While the BA-like step (11) involves an integral w.r.t. the $\nu$ distribution, in our algorithm $\nu$ is represented as $n$ particles, so the integral reduces to a finite sum, and this step is tractable and no more expensive than one update in the WGD algorithm. Our original writing on line 242 (“provided the integral in the numerator can be computed exactly”) referred to the general setting which applied to both our method and NERD. Indeed, NERD uses a continuous $\nu$ and requires additional approximation for this computation (as explained on lines 244-248). > Second, and more crucial, Regarding the clarity of the text itself. Writing is very unclear and not well-organized... We sincerely apologize for the unclear writing, numerous typos and poor notation in our rushed write-up ahead of the submission deadline. We have since made extensive revisions to improve the writing and correct all errors. Going over the list of issues (also see overall response #1, 2): > The definition for the Wasserstein gradient is not clear, and it is unclear how to compute it. 
We revised our introduction of the Wasserstein gradient, separating the formal definition (Definition 4.1; as the Euclidean gradient of the first variation) from its linearization property (eq 15). We explain that the former gives us the computational recipe, while the latter characterization justifies its role as a bona fide gradient (i.e., “taking a small enough step along the Wasserstein gradient indeed decreases the loss functional”). We add references to [Ambrosio et al. 2008, Definition 10.1.1], [Chizat 2022, Lemma A.2] and [Carlier et al. 2022, Proposition 4.2] to precisely relate the definition and the linearization property in our setting. > $\psi^t$ is used sometimes for a gradient, and sometimes for a potential (whose calculation is also not explained). This was an unfortunate notation clash; we have now changed the notation for the Wasserstein gradient to $\Psi$. We improved the discussion around its calculation based on Def 4.1, clarifying that it comes from a straightforward differentiation of the Sinkhorn potential (i.e. the output of Sinkhorn’s algorithm) in the case of $\mathcal{L}\_{EOT}$, and we added in Section 4.1 an explicit formula in the case of $\mathcal{L}\_{BA}$ (originally given by eq. 23 in the Supplementary Material). > Eq. (15) is missing some relation sign (probably $\leq$). We now added the missing comma. > In Lemma 4.2, k is not defined (a typo, maybe?). Typo indeed, corrected as $t$. > Many cases of "it is well known" or "we know" without any reference. We have eliminated all instances of "it is well known" or "we know" and replaced them with detailed references. **References** Lénaïc Chizat. Mean-field langevin dynamics: Exponential convergence and annealing. arXiv:2202.01009, 2022 Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient flows in metric spaces and in the space of probability measures. Guillaume Carlier, Lénaïc Chizat, and Maxime Laborde. Lipschitz continuity of the Schrödinger map in entropic optimal transport.
arXiv:2210.00225, 2022 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response and clarifications. I will recommend acceptance, but still insist that the published version should be more readable. --- Reply to Comment 1.1.1: Title: Thank you for the positive re-evaluation; let us know how we can address any remaining concerns Comment: We would like to thank reviewer **GB3T** for their positive re-evaluation. We believe the reviewer was mainly concerned about 1. the tractability of our upper bound estimate; 2. unclear presentation, especially the technical discussion around the Wasserstein gradient. On #1, we hope our explanation about the upper bound estimate (see above and global rebuttal point 1) clarified the reviewer's concern about its tractability. Let us know otherwise and we are happy to discuss it further. On #2, we strongly agree with improving the readability of the writing prior to publication, and have therefore made extensive revisions according to reviewers' suggestions (listed above and outlined in the global response). Please let us know if you have additional suggestions or how we can address your remaining concerns.
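To make the computational recipe discussed in this thread concrete, here is a heavily simplified particle-descent sketch for a BA-type functional with quadratic distortion. This is an illustration under stated assumptions, not the authors' implementation: the multiplier `lam` (playing the role of $1/\varepsilon$), the uniform particle weights, and the toy data are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def ba_loss(x, y, lam):
    # BA-type loss over a uniform n-particle reproduction measure:
    #   -(1/m) * sum_i log( (1/n) * sum_j exp(-lam * ||x_i - y_j||^2) )
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)      # (m, n) squared distances
    return -np.mean(np.log(np.mean(np.exp(-lam * d), axis=1)))

def wgd_step(x, y, lam, step):
    # Gradient of the first variation evaluated at each particle y_j:
    #   grad_j = (2 * lam / m) * sum_i r_ij * (y_j - x_i),
    # where r_ij is the softmax responsibility of particle j for sample i.
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    w = np.exp(-lam * d)
    r = w / w.sum(axis=1, keepdims=True)                    # (m, n)
    diff = y[None, :, :] - x[:, None, :]                    # (m, n, dim)
    grad = (2 * lam / x.shape[0]) * np.einsum('ij,ijd->jd', r, diff)
    return y - step * grad

# Toy source: two-component Gaussian mixture in 1-D, n = 8 particles.
x = np.concatenate([rng.normal(-2, 0.3, 500), rng.normal(2, 0.3, 500)])[:, None]
y = rng.normal(0.0, 1.0, (8, 1))
lam = 2.0
losses = [ba_loss(x, y, lam)]
for _ in range(200):
    y = wgd_step(x, y, lam, step=0.05)
    losses.append(ba_loss(x, y, lam))
assert losses[-1] < losses[0]   # the objective decreases along the descent
```

On this toy source the particles drift toward the two cluster centers, which matches the intuition that descending on particle locations "learns the support" of the reproduction distribution.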
Summary: This paper proposes estimating the rate-distortion function using Wasserstein gradient descent. Different from the classical Blahut-Arimoto algorithm, in which the support points are fixed, the proposed method is able to learn the support of the optimal distribution. The authors prove finite-sample complexity bounds, and also conduct experiments demonstrating the superiority of the proposed method. Strengths: 1. By drawing interesting connections between the R-D problem and entropic optimal transport, this paper proposes a novel algorithm which is conceptually simple but effective. 2. The authors combine the advantage of Wasserstein gradient descent with BA and come up with a hybrid method that can update both the particle weights and the support of the distribution. 3. In the experiments section, the authors make a rigorous comparison of the proposed method with various existing methods, from the classic BA algorithm to modern neural network based approaches. Weaknesses: While this paper is technically solid, some parts of the paper are hard to read. For example, $I(X;Y)$ in eq.(1) is not defined, and the boundedness assumption in Lemma 4.2 may need additional explanation. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. As the authors mention in Sec. 4.2, the hybrid algorithm alternates between WGD and BA. While this seems like a heuristics-based method, would this hurt the convergence properties of WGD? 2. Lemma 4.2 only implies that WGD would converge to stationary points. Is it possible that, under suitable conditions on the objective function, it will converge to minimizers? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback. We have incorporated your suggestions to improve the writing. Below we go over all the points raised and detail how they have been addressed. > Weaknesses: > While this paper is technically solid, some parts of the paper are hard to read. For example, $I(X;Y)$ in eq.(1) is not defined, and the boundedness assumption in Lemma 4.2 may need additional explanation. $I(X;Y)$ stands for the mutual information of the joint distribution $P_X Q_{Y|X}$, which we now define after eq (1) in our updated manuscript. We have also clarified the boundedness assumption in Lemma 4.2, specifically the condition $\sup_t \int \| \nabla V_{\mathcal{L}}(\nu^{(t)})\|^2 \,d\nu^{(t)} < \infty$. We replaced “assume that” with “suppose that”, to make it clear that this is the hypothesis of the lemma. This condition is satisfied when, e.g., the cost function $\rho$ has a bounded derivative on the relevant domain, which is the case in our examples. > Questions: > As the authors mention in Sec. 4.2, the hybrid algorithm alternates between WGD and BA. While this seems like a heuristics-based method, would this hurt the convergence properties of WGD? This is an interesting question, and one we can only partly answer. The hybrid algorithm is motivated by the fact that the BA update is in some sense orthogonal to the Wasserstein gradient update, and can only monotonically improve the objective. While empirically we observe the BA steps to not hurt -- but rather accelerate -- the convergence of WGD (see Section 5.2), additional effort is required to theoretically guarantee convergence of the hybrid algorithm. There are two key properties we use for the convergence of WGD: 1) a certain monotonicity of the update steps (up to higher order terms, gradient descent improves the objective) and 2) stability of gradients across iterations. If we include the BA step, we find that 1) still holds, but 2) may a priori be lost.
Indeed, 1) holds since BA updates monotonically improve the objective. Using just 1), we can still obtain a Pareto convergence of the gradients for the hybrid algorithm, $\sum_{t=0}^\infty \gamma_t \int \|\nabla V_{\mathcal{L}}(\nu^{(t)})\|^2 \,d\nu^{(t)} < \infty$ (here $\nu^{(t)}$ are the outputs from the respective BA steps and $\gamma_t$ is the step size of the gradient steps). Without property 2), we cannot conclude $\int \|\nabla V_{\mathcal{L}}(\nu^{(t)})\|^2 \,d\nu^{(t)} \rightarrow 0$ for $t\rightarrow \infty$. We emphasize that in practice, it still appears that 2) holds even after including the BA step. Motivated by this analysis, an adjusted hybrid algorithm where, e.g., the BA update is rejected if it causes a drastic change in the Wasserstein gradient, could guarantee that 2) holds with few practical changes. From a different perspective, we also believe the hybrid algorithm may be tractable to study in relation to gradient flows in the Wasserstein-Fisher-Rao geometry (cf. Yan et al. 2023). We have added a discussion in Section 4.2, and leave further investigation to future work. > Lemma 4.2 only implies that WGD would converge to stationary points. Is it possible that under suitable conditions on the objective function, it will converge to minimizers? As our objective is non-convex over its domain (probability measures supported on at most $n$ points), we could only show convergence to stationary points. This is a very common limitation in the literature on gradient descent algorithms. It is in contrast to the discrete R-D problem on a fixed finite alphabet, which is convex, finite-dimensional, and solvable (in principle) by the BA algorithm. It is theoretically possible for our algorithm to get stuck at a non-minimizer, but we have yet to observe this on real-world problems. It is worth mentioning here that in the infinite particle limit, Yan et al.
2023 recently proved that Wasserstein-Fisher-Rao gradient flow for the BA functional can only converge to a global minimum (if it converges). This is important insofar as our hybrid algorithm could be interpreted as an implementation of gradient descent in this Wasserstein-Fisher-Rao geometry. We discuss this in the revised version at the end of Section 3. --- Rebuttal Comment 1.1: Title: Response Comment: I would like to thank the authors for their detailed response. I will keep my rating and still recommend accept.
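To make the alternating scheme discussed in this rebuttal concrete, below is a minimal particle-based sketch of one hybrid iteration under a squared-error distortion. All function names, the parametrization, and the toy objective are our own illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def ba_functional(X, Y, w, lam):
    # Illustrative BA-style R-D objective over weighted particles:
    # F(nu) = -(1/lam) * E_x[ log sum_j w_j exp(-lam * ||x - y_j||^2) ]
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)      # (m, n) distortions
    return -np.log((w * np.exp(-lam * D)).sum(1)).mean() / lam

def hybrid_step(X, Y, w, lam, lr):
    """One BA reweighting step followed by one Wasserstein gradient step."""
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-lam * D)
    # BA step: update particle weights; this monotonically improves F.
    post = (w * K) / (w * K).sum(1, keepdims=True)          # q(y_j | x_i)
    w = post.mean(0)
    # WGD step: move particles along the negative gradient of the first
    # variation of F (the Euclidean gradient w.r.t. y_j, rescaled by w_j).
    post = (w * K) / (w * K).sum(1, keepdims=True)
    euclid_grad = 2.0 / len(X) * np.einsum('ij,ijd->jd', post, Y[None] - X[:, None])
    Y = Y - lr * euclid_grad / np.maximum(w[:, None], 1e-12)
    return Y, w
```

In this toy setup the objective decreases under both sub-steps, consistent with the monotonicity property 1) discussed above.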
Rebuttal 1: Rebuttal: We thank all reviewers for taking the time to review this manuscript and for providing insightful feedback, which has strengthened our submission. We appreciate that the reviewers recognized our proposed method as “principled” (**jdNJ**), “novel” (**SDqV, GB3T, 4bjw, RV2s**), and “technically solid” (**jdNJ, SDqV, GB3T, RV2s**), and found our numerical results convincing (**jdNJ, SDqV, 4bjw, RV2s**). Moreover, our sample complexity result based on entropic optimal transport (Prop 4.3) “is the first such result in this line of work” (**jdNJ**) and “important” (**GB3T**). The reviewers also pointed to some parts of our manuscript that were unclear, particularly the tractability of our upper bound estimate and the computational aspects of the Wasserstein gradient. We have taken the suggestions seriously and improved our manuscript accordingly. As we are not given the option to share our updated manuscript, we summarize the main improvements below: 1. [in response to **GB3T**] We have addressed the confusion around line 242 and made it clear that the upper bound from our method is in fact tractable. While the BA-like step (11) involves an integral w.r.t. the $\nu$ distribution, in our algorithm $\nu$ is represented as $n$ particles, so the integral reduces to a finite sum and this step is no more expensive than one update in the WGD algorithm. Our original writing on line 242 referred to the general setting which applied to both our method and NERD. Indeed, NERD uses a continuous $\nu$ and requires additional approximation for this computation (as explained in lines 244-248). 2. [in response to **GB3T**] We have extensively revised the discussion around WGD to make it clearer and more accessible. - In our updated version, we now preface the definition of the Wasserstein gradient with the intuition of gradient flow in the space of probability measures. 
- We added a new explanation that the formal definition (as the Euclidean gradient of the first variation of the loss functional) is the computational basis of the algorithm, whereas the linearization property of the Wasserstein gradient in eq 15 justifies its role as a bona fide gradient and is what allows us to prove convergence of WGD to a stationary point (Lemma 4.2). We added detailed references to [Ambrosio et al. 2008, Definition 10.1.1], [Chizat 2022, Lemma A.2] and [Carlier et al. 2022, Proposition 4.2] to precisely relate the definition and the linearization property in our setting. - On the computation of the Wasserstein gradient (WG): for the EOT functional, we now state clearly that its WG is computed by taking the Euclidean gradient of the output of the Sinkhorn algorithm. For the BA functional, the WG has an explicit formula (derived in Supplementary Material eq 23) which we now include in Section 4.1 of the main text. 3. [in response to **jdNJ, RV2s**] We expanded our references to include related work on the statistical complexity of EOT (Genevay et al. 2019, Riollet and Stromme 2022), non-parametric Gaussian mixture estimation (Yan et al., 2023), as well as recent papers in rate-distortion theory (Wu et al. 2022, Lei et al. 2023) which also note a connection between R-D and EOT. Wu et al. study the finite (and known) alphabet setting, similarly to the BA algorithm, and Lei et al. discuss the relation to optimal scalar quantization. By contrast, our work solves the computational problem of R-D estimation with Wasserstein gradient descent, with a focus on the continuous, high-dimensional, and stochastic optimization setting. Besides our methodological contribution, we also connect literature on related problems in statistics, information theory, and entropic optimal transport, and actually leverage the latter connection to characterize the sample complexity and approximation quality of our R-D estimator. 4. 
[in response to **4bjw**] To demonstrate the effectiveness and scalability of our method on more real-world data, we added a new experiment on the MNIST dataset. Again we obtain tighter R-D bounds with less computation than alternative methods. **Please see the attached PDF**. 5. [in response to **jdNJ**] One contribution to the R-D literature, which was not sufficiently emphasized in our first manuscript, is that we introduce a new, rich class of examples with known ground truth as a benchmark for algorithms. While the R-D literature usually uses Gaussian, Laplacian, or Bernoulli sources for that purpose, we leverage the connection with maximum likelihood deconvolution to find that a Gaussian convolution of *any* distribution can serve as a source with a known optimal reproduction distribution. While we already used this result to assess the optimality of various algorithms on the circular source in Section 5.2, we now discuss it more prominently in the updated manuscript. ---- **References** Lénaïc Chizat. Mean-field langevin dynamics: Exponential convergence and annealing. arXiv:2202.01009, 2022 Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient flows in metric spaces and in the space of probability measures. Guillaume Carlier, Lénaïc Chizat, and Maxime Laborde. Lipschitz continuity of the Schrödinger map in entropic optimal transport. arXiv:2210.00225, 2022 Aude Genevay, Lénaic Chizat, Francis Bach, Marco Cuturi, and Gabriel Peyré. Sample complexity of Sinkhorn divergences. AISTATS, 2019 Philippe Rigollet and Austin J Stromme. On the sample complexity of entropic optimal transport. arXiv:2206.13472, 2022 Yuling Yan, Kaizheng Wang, and Philippe Rigollet. Learning gaussian mixtures using the wasserstein-fisher-rao gradient flow. arXiv:2301.01766, 2023 Shitong Wu, Wenhao Ye, Hao Wu, Huihui Wu, Wenyi Zhang, and Bo Bai. A communication optimal transport approach to the computation of rate distortion functions. 
arXiv:2212.10098, 2022 Eric Lei, Hamed Hassani, and Shirin Saeedi Bidokhti. On a relation between the rate-distortion function and optimal transport. arXiv:2307.00246, 2023 Pdf: /pdf/18adb85a2f854b0d7a4f3a0f3a688efc43973e1f.pdf
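As a companion to point 2 above, the Sinkhorn-based computation of the Wasserstein gradient of the EOT functional can be sketched as follows. This is our own minimal illustration (the function names and squared-error cost are assumptions), using the standard envelope-theorem argument that the gradient of the entropic OT value with respect to the particle positions passes through the optimal plan:

```python
import numpy as np

def sinkhorn_plan(C, a, b, eps, iters=500):
    # Standard Sinkhorn iterations for the entropic OT plan between
    # marginals a and b with cost matrix C and regularization eps.
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def eot_particle_grad(X, Y, a, b, eps):
    # Envelope-theorem gradient of the EOT value w.r.t. particle positions Y:
    # d/dy_j <P*, C(Y)> = sum_i P*_ij * 2 (y_j - x_i)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    P = sinkhorn_plan(C, a, b, eps)
    return 2.0 * (P.sum(0)[:, None] * Y - P.T @ X)
```

In practice one can equivalently differentiate through the Sinkhorn iterations with automatic differentiation, matching the description in point 2 of "taking the Euclidean gradient of the output of the Sinkhorn algorithm".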
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes an estimator using WGD and moving particles. This is different from prior methods that leverage neural networks to fit the unknown high-dimensional support of the optimal reconstruction distribution. The authors also note a connection with entropic OT and provide sample complexity bounds on estimating R(D). Numerical results show that it provides tighter estimates of R(D) compared to prior estimators on high-dimensional data. Strengths: - The method proposed is very principled, and well presented. The ideas are clear and the use of particles is natural for this problem. Showing how it can be implemented in practice was quite interesting. - The finite sample complexity result is nice, as it is the first such result in this line of work. Additionally, it shows one interesting use of the connection between R(D) and entropic OT. It would also be good to cite other recent work that discusses this connection (https://arxiv.org/pdf/2212.10098.pdf, https://arxiv.org/pdf/2307.00246.pdf). - The experimental results show good support of the method, with regard to the tightness of the bounds. Weaknesses: - While interesting, the work appears to be fairly incremental in terms of rate-distortion estimation. It provides slightly tighter bounds but fails to estimate at rates larger than log(n), which is a fundamental problem in rate-distortion (or mutual information) estimation (see https://arxiv.org/abs/1811.04251). Hence, it is difficult to say that this paper provides a wholly new solution to the rate-distortion estimation problem, as it seems to provide a third alternative to the same rate-distortion objective. In practice the prior work by Yang and Mandt is the only one so far applicable to most real-world data like images, where the rate per sample is far larger than log(n) for any feasible n.
- The computational (specifically, memory) cost of the particle method was not discussed, and this should be mentioned, especially since each particle is a high-dimensional vector. The benefit of the neural methods is that one does not need to maintain many high-dimensional vectors. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Do the neural methods (NERD and Yang and Mandt) provide tighter bounds as the network complexity or architecture increases? If complex enough, could they be comparable with the particle method as well? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and for bringing our attention to recent related work. We have followed your suggestion to cite (Wu et al. 2022, Lei et al. 2023), and added a new experiment on MNIST as well as more discussion around the computational cost of our method; also see points 2, 3, 5 of our main rebuttal. Below we will go over all concerns raised and detail how they have been addressed. > While interesting, the work appears to be fairly incremental in terms of rate-distortion estimation. It provides slightly tighter bounds but fails to estimate at rates larger than log(n) which is a fundamental problem in rate-distortion (or mutual information) estimation… We agree that the log(n) issue is fundamental to all sample-based estimators of mutual information (and R-D), including ours and NERD, and neural methods are more suitable in certain regimes, e.g., for smooth, high-dimensional distributions. On the other hand, our results demonstrate that directly optimizing over particles can be more effective, for instance in settings where the optimal reproduction distribution is concentrated on low-dimensional manifolds and/or has many widely separated modes. This gives our method several distinct advantages over neural approaches: 1. **Ease of use for practitioners.** Our method completely removes the need to specify neural net architectures, and involves only choosing 1) a learning rate schedule (which we automate with adaptive methods like Adam) and 2) the number of particles $n$, which controls the solution quality. In our experience, the neural methods also tend to be harder to train: NERD requires experimenting with $n$ to ensure accurate gradient estimation (see the failure case in the left panel of Fig. 1 of the main text), and numerical issues from deep neural density models (deep normalizing flows) can also cause the training of RD-VAE to diverge. 2.
**Improved R-D bounds with lower computation complexity.** Without much tuning, our method consistently produces R-D bounds comparable with or better than NERD, as well as RD-VAE up to the rate limit of log(n). At the same approximation quality, our method also tends to be more memory and computationally efficient in our experiments. We give more insight into this in our next response on computation efficiency. Finally, we want to highlight another aspect of our contribution to R-D theory, which is the introduction of a new, rich class of examples with known ground truth as a benchmark for algorithms. We discuss it more prominently in our updated manuscript, as explained in point # 5 of our main rebuttal. > The computational (specifically, memory) cost of the particle method was not discussed, and this should be mentioned, especially if each particle consists of high-dimension vectors. The benefit of the neural methods is that one does not need to maintain many high dimensional vectors. We now added a discussion in Section 4 about the computational complexity of WGD vs. the neural methods. Although WGD maintains potentially high-dimensional vectors, its per-iteration complexity (both in computation and memory) is largely comparable to that of NERD given the same $n$; this is because in both methods, the main bottleneck is in computing a pairwise distortion matrix whose cost scales as $O(m n d)$. Depending on the size of the generator network, NERD can be computationally more expensive than WGD, as it additionally requires backpropagation through the network. RD-VAE also computes a distortion matrix but essentially with $n=1$; however its computation and memory requirements are dominated by that of training (typically large) neural density estimators. 
As different network architectures are used on different data sources (with varying levels of GPU support for operations such as convolution), it can be difficult to compare the computational complexity of WGD with the neural methods; nonetheless, we make a timing comparison in Fig 2 of the main rebuttal PDF, where the per-iteration time on MNIST is roughly 1:1.4:5 for WGD:RD-VAE:NERD. Finally, we remark that maintaining particles is inherently more efficient in the low distortion setting where the source consists of distinct atoms; in this regime, the neural methods also have to memorize the training data, but can only store it indirectly in the network parameters. > Do the neural methods (NERD and Yang and Mandt) provide tighter bounds as the network complexity or architecture increases? If complex enough, could they be comparable with the particle method as well? In theory, yes, all three methods (NERD, RD-VAE, and ours) converge to the true R-D in the ideal limit of infinite capacity / number of particles & samples. However, in practice it is a different question how efficiently each method estimates R-D given a computation budget. In our experience, increasing the network complexity for the neural methods can help produce tighter bounds, but this does not seem to happen consistently, for various reasons. - On many experiments, the neural architectures we borrowed from the original papers appear to have been well optimized, and tweaking them yielded little or no gains (e.g., the DCGAN architecture for NERD on MNIST, or the hierarchical VAE for RD-VAE). - Increasing the network complexity may also make the training more difficult or even cause it to diverge, which we observed with RD-VAE. - In settings where the optimal reproduction distribution is concentrated on low-dimensional manifolds and/or has many widely separated modes, the neural approach is inherently less efficient because of the built-in bias of neural nets towards learning smooth mappings.
The right-hand panel in Fig 1 shows one example where the optimal $\nu$ is concentrated on the unit circle and is recovered by WGD, while the neural methods produce $\nu$ smeared around the circle. By contrast, we observe consistently improving R-D estimates when increasing the number of particles in our approach. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response, and recommend acceptance for the paper.
Fair, Polylog-Approximate Low-Cost Hierarchical Clustering
Accept (poster)
Summary: This paper studies the clustering problem in a fair setting. It proposes an approximation algorithm that achieves polylogarithmic factors for fairness and cost while also keeping relative balance. This work greatly improves on the result of Knittel et al. [2023], which has an approximation factor of $O(n^\delta\text{polylog}(n))$. The simulation results on two datasets verify the effectiveness of their algorithm (replacing the binary clustering algorithm that theoretically achieves a factor of $O(\sqrt{\log n})$ with average-linkage). Strengths: (1) A theoretical result of an approximation algorithm for the fair clustering problem, with a great improvement in the approximation factor. (2) Simple and intuitive tree operations in clustering adjustment that can be theoretically analysed. Weaknesses: (1) This work seems to heavily depend on Knittel et al. [2023], although the authors are probably overlapping heavily. (2) Scalability of $h$, $k$ and $\lambda$ is doubtful. According to Theorem 1, $h$ should be much larger than $k^\lambda$, but in the experiments, $h=4$, $k=\lambda=2$ such that $h=k^\lambda$, which seems to be a violation of the settings. (3) There are some unclear points (refer to the questions). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) What is the role of $\gamma$ in Theorem 1? Do you mean an approximation algorithm for the cost of the input binary clustering tree (as described in Lemma 8) rather than the optimal cost of the original problem? So, for the latter, $\gamma$ needs to be multiplied outside the approximation factor, right? (2) How do you define the cluster balance? What does $c$ mean in the simulation settings? (3) Line 171, Def. 6, do you mean root$(T_f)$, rather than root$(T)$? (4) Line 236, do you mean "at most $h$" rather than "at least $h$"? (5) A typo: Line 281, "poitns" should be "points". (6) A ref error: Line 581. (7) Please delete Line 532, which has revealed author information!
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors have addressed the limitations in a specific section (Appendix A). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *“This work seems to depend heavily on Knittel 2023…”* This work does build upon the algorithm of Knittel 2023, but greatly simplifies the procedure to be more easily implemented. Moreover, our analysis removes the extra $n^\delta$ approximation factor, which allows for the first polylogarithmic approximation to cost with fairness. Since the best known unfair cost approximation is also polylogarithmic, this result is substantial in showing that a clustering can be made fair with only a modest increase in the approximation factor. In a sense, the “price of fairness” is demonstrated to be low. *“Scalability of $h, k$, and $\lambda$ is doubtful…”* While the runtime may not be technically scalable in terms of these parameters, these parameters are much less than $n$. In consideration of this, the runtime is certainly scalable in $n$. Since $n \gg h \gg \lambda, k,$ this will certainly be $O(n^2 \log(n))$. Notice that for our Corollary, we actually assume the largest parameter is $h = O(\log n)$. Therefore, the runtime should be something like $O(n \log^2(n))$. *“What is the role of $\gamma$?”* As correctly noted, $\gamma$ is the approximation factor to cost incurred by the base unfair hierarchical clustering algorithm. And yes, $\gamma$ should be included in the approximation factor; we will correct this. To date, the best known such algorithm admits an $O(\sqrt{\log n})$ approximation factor. *“How do you define the cluster balance? What is c in the experiments?”* We apologize for the confusion in the presentation. The parameter noted as $c$ in the experiments is used to define the $\epsilon$ value and is not related to the notation for cluster balance in the theoretical results. We will rectify this in the camera-ready version of our paper. We lastly thank the reviewer for identifying typos and other mistakes, which have now been corrected. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response.
The technical difference from Knittel 2023 is still unclear to me, even after reading the rebuttal to Reviewer gEkD. Sorry for not having enough time to check all the details. More intuitive comparisons to Knittel 2023 are required, especially in the case that many operations are the same. I stick to my score.
Summary: The paper addresses the problem of fair hierarchical clustering. Given a hierarchical clustering with a certain cost (determined by Dasgupta's cost function), the paper shows how the hierarchy (tree) can be efficiently modified so that fairness constraints are satisfied and the cost of the tree increases (provably) only by a small factor. The theoretical results are validated through experiments on two data sets. Strengths: (a) The paper provides the first (provable) polylog approximation for the cost of hierarchical clustering while satisfying fairness requirements. This is an important advance in fair hierarchical clustering. (b) The algorithm simplifies a previously known algorithm while achieving the approximation bound. (c) The paper is written very well. Weaknesses: A (very) minor weakness is that the statements of theorems require a good amount of effort to understand since they involve many different parameters. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: (1) On lines 124--126 (page 4), it is mentioned that the approximation factor is with respect to an optimal vanilla hierarchy (which doesn't consider fairness). Did the previous work also use this assumption? (2) Are any hardness of approximation results known for the problem? (It will be interesting to see how close the polylog approximation is compared to what can't be achieved in polynomial time under a standard assumption such as P != NP.) (3) The statements of theorems/lemmas etc. involve many parameters. This reviewer agrees that this is unavoidable for making the theorems precise. Would it be possible to include informal statements of results that are easier to understand (even if they are not mathematically accurate)?
Some minor typos: (a) Line 62 (page 2): "as a in" ---> "as an" (b) Line 121 (page 4): "increasing cost" ---> "increasing the cost" Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *“On lines 124--126 (page 4), it is mentioned that the approximation factor is with respect to an optimal vanilla hierarchy (which doesn't consider fairness). Did the previous work also use this assumption?”* Yes, the previous work also considers an approximation to an optimal, unfair, hierarchy, since it is currently not known how much worse an optimal fair hierarchy may be. One can simply view it as a lower bound on the optimal fair cost; however, we note it in the results since it is a strictly stronger statement. It is interesting to note that the best known (unfair) cost approximation is also a power of log(n), specifically ½, so our result does not deviate too considerably from this. Though it would be nice to shoot for logarithmic or possibly sublogarithmic to narrow this gap. That's a good future direction. *“Are any hardness of approximation results known for the problem? (It will be interesting to see how close the polylog approximation is compared to what can't be achieved in polynomial time under a standard assumption such as P != NP.)”* The only known hardness results are for the cost optimization in the unfair setting (which is known to be APX-hard). We acknowledge the interest in determining a lower bound on the approximation factor for fair clustering hardness. This, however, would require distinguishing the fair optimum from the unfair optimum. No previous work has managed to do that; in fact, all approximations for this problem (ours and previous) compared the output cost to the unfair optimum cost, which is a lower bound on the fair optimum cost. Differentiating between the two in any way would be a very useful result! “The statements of theorems/lemmas etc. involve many parameters. This reviewer agrees that this is unavoidable for making the theorems precise.
Would it be possible to include informal statements of results that are easier to understand (even if they are not mathematically accurate)?” We intend to incorporate the following theorem (sketches) in the main text to address these concerns and improve the overall readability of our results: *Given a $\gamma$-approximation to the cost objective for a hierarchical clustering on a set of $n$ points, Algorithm 2 yields an $O(\log^2 n)$ approximation to cost which is fair in time $O(n \log^2 n)$. Moreover, all clusters are $O(1/\log n)$ balanced, i.e., any cluster’s child clusters differ in size by at most an $O(1/\log n)$ fraction.* This result is highlighted in Corollary 1, but will be expanded upon with more intuition for the definitions before their formal definition in Section 2 to improve readability. --- Rebuttal Comment 1.1: Comment: I have checked the rebuttal. My questions have been answered in a satisfactory manner.
Summary: The paper considers finding a hierarchical clustering of (approximately) minimum Dasgupta cost under fairness and balance constraints. It appears that for any constant fairness range (i.e. the quantities a_i and b_i) they have a quasi-linear time algorithm which has a polylogarithmic approximation ratio. In fact - their algorithm takes any (unfair) hierarchical clustering and converts it into a fair one. Strengths: The best previous polynomial-time algorithm for approximately minimising the Dasgupta cost under fairness constraints had a polynomial approximation ratio (with exponent dependent on the fairness range). I think that going from a polynomial to a polylogarithmic approximation ratio is a big step. The authors have conducted experiments on their algorithm - showing empirically that the cost of the fair clustering produced is not much greater than the cost of the original (unfair) clustering. However, there are no experiments comparing to other algorithms. Weaknesses: The algorithm is tailored to the Dasgupta cost, which is only one measure of goodness of a hierarchical clustering and in itself can be seen as a heuristic. However - I believe this cost to be very famous so the result is significant. The way the bound is written at the moment is completely impenetrable - please see the “Questions” section for advice on this. There are issues with the writing of the paper which I shall now list (I include here any typos etc.): - The Introduction section uses a lot of notation that is not defined until later in the paper. This should definitely be fixed. - The authors state that the cost is hypothesised not to be O(1) approximable. What they mean is not O(1) approximable by a polynomial-time algorithm. - Definition 6 is ambiguous and hence is not a proper definition.
- In line 60 I believe the authors mean “fairness literature” - In line 84 \gamma must surely appear in the approximation ratio - Line 103 should start with “which is” instead of “is” - In line 103 “tree” should be replaced by “graph” - The use of “i.e.” in line 123 is wrong. - Line 161 should say “inserts a new vertex p…” - In line 133 \ell(C) (which I assume is the number of elements of C of colour \ell) is not defined. This notation also overloads \ell - it is both a colour and a function (corresponding to the colour). I recommend something like N_{\ell}(C). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Suppose I give you a<1 and b>1 and I enforce you to create a fair hierarchical clustering with a_i=a and b_i=b for all colours i (where a_i and b_i are as in line 95). Can you write the parameters, the time complexity and the approximation ratio in terms of a and b? This is not just a question - doing this will vastly improve the presentation of the result since this is the logical way round (I give you the fairness constraints and you give me the (approximately) best hierarchical clustering satisfying those constraints). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Authors have addressed limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
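For reference, the Dasgupta cost objective discussed in these reviews charges each similarity edge by the number of leaves under the lowest common ancestor of its endpoints. A minimal sketch (our own illustrative code and naming, not the paper's):

```python
def dasgupta_cost(tree, root, weights):
    # tree: dict mapping each internal node to its (left, right) children;
    # leaves are any hashable objects that do not appear as keys of `tree`.
    # weights: dict mapping frozenset({i, j}) -> similarity w_ij.
    # cost(T) = sum over pairs (i, j) of w_ij * |leaves(lca(i, j))|
    def leaves(node):
        if node not in tree:
            return {node}
        left, right = tree[node]
        return leaves(left) | leaves(right)

    total = 0.0
    stack = [root]
    while stack:
        node = stack.pop()
        if node not in tree:
            continue
        left, right = tree[node]
        L, R = leaves(left), leaves(right)
        size = len(L) + len(R)
        # Pairs split at `node` have `node` as their lowest common ancestor.
        for i in L:
            for j in R:
                total += weights.get(frozenset((i, j)), 0.0) * size
        stack.extend([left, right])
    return total
```

For the balanced tree {'A': ('B', 'C'), 'B': (0, 1), 'C': (2, 3)} with unit weights on the pairs (0,1), (2,3), and (0,2), the cost is 2 + 2 + 4 = 8: the first two pairs are split at clusters of size 2, the last at the root of size 4.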
Rebuttal 1: Rebuttal: *“The way the bound is written is completely impenetrable…”* We emphasize, as pointed out by reviewer XyRG, that for the given problem, our results require some in-depth math to be fully accurate. To alleviate these issues, we will provide informal definitions and theorem statements in the response to reviewer XyRG. *“Suppose I give you $a<1$ and $b>1$ and I enforce you to create a fair hierarchical clustering with $a_i=a$ and $b_i=b$ for all colours $i$ (where $a_i$ and $b_i$ are as in line 95). Can you write the parameters, the time complexity and the approximation ratio in terms of $a$ and $b$?”* We note that $b_i$ should always be $\le 1$ since it’s the upper bound on the fractional representation of a color in a cluster (so if $b_i > 1$, it’s equivalent to $b_i = 1$, since that is the “no upper bound” case). We will answer the question assuming $0 < a < b < 1$, and all $a_i = a$ and $b_i = b$. In that case, yes, you could derive a set of parameters needed to achieve this degree of fairness in terms of $a$ and $b$, and then plug them into the time complexity and approximation ratios. However, since there is a tradeoff between our parameters $\epsilon, h$, and $k$, and how they affect the degree of fairness, there will actually be a large range of parameterizations one can work with, so we cannot pose a straightforward answer. Realistically, this might be best done programmatically with parameter tuning. We thank the reviewer for the suggested revisions to improve readability of our main results. To more intuitively present the results, we intend to include the following informal theorem statement in the introduction (and defer the full parametrization to the analysis): *Given a $\gamma$-approximation to the cost objective for a hierarchical clustering on a set of $n$ points, Algorithm 2 yields an $O(\log^2 n)$ approximation to cost which is relatively fair in time $O(n \log^2 n)$. Moreover, all clusters are $O(1/\log n)$ balanced, i.e.,
any cluster’s child clusters differ in size by at most an $O(1/\log n)$ fraction.* Here, “relative fairness” simply means looser constraints on the proportional representation of each color in each cluster. Lastly, we thank the reviewer for noting typos and grammatical errors, all of which have now been corrected. --- Rebuttal Comment 1.1: Comment: I have looked at the other reviews and the rebuttal and will not be changing my score. I very much like the result and I'm happy for this paper to be accepted - but if it is then it should be cleaned up (by incorporating the changes suggested by myself and the other reviewers) before publication. I would like to note that reading this paper inspired me to get into the field of fairness myself - thanks for that :-)
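To fix intuition for the fairness constraints discussed in this exchange (each color's fraction in a cluster must lie between lower and upper bounds $a$ and $b$), a cluster's fairness can be checked in a few lines. This is our own illustrative sketch with assumed names, not code from the paper:

```python
from collections import Counter

def is_fair(cluster_colors, all_colors, a, b):
    # cluster_colors: the color label of each point in one cluster.
    # Fair iff every color's fraction in the cluster lies in [a, b];
    # colors absent from the cluster count as fraction 0.
    counts = Counter(cluster_colors)
    n = len(cluster_colors)
    return all(a <= counts.get(c, 0) / n <= b for c in all_colors)
```

With $a = 0.3$ and $b = 0.7$ over two colors, a 2-2 cluster is fair while a 3-1 cluster is not, since 3/4 exceeds the upper bound.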
Summary: This paper considers fairness in hierarchical clustering and proposes an approximation algorithm that modifies a given unfair approximate vanilla hierarchy $T$, whose cost is bounded by an $\alpha$-factor of the optimal (OPT) hierarchy tree, i.e., $cost(T)\leq \alpha \cdot cost(OPT)$, into a polylogarithmic-approximate fair hierarchical clustering. This work is developed based on the ideas of Knittel et al. [2023] but simplifies some of the key operators, e.g., the tree folding operator, and achieves a polylogarithmic approximation for cost for the first time. Strengths: - Proposed an algorithm that modifies an unfair hierarchical clustering and yields a fair hierarchy with only a modest increase in cost. - Provided real dataset experiments showing that their proposed algorithm improves the fairness in clustering (i.e., balances the proportion of nodes from different parties) with a modest cost ratio increase. Weaknesses: - I think the presentation of this paper needs to be largely modified. First, this paper uses some technical terminology before it is defined, which makes it hard to understand what the statements mean before reading to the end of the paper. For example, in the last paragraph of the Introduction, “O(function(N))-approximation” is used to compare previous methodologies. But neither the definition of the cost in hierarchical clustering nor what this approximation means was provided. Theorem 1 on page 3 is an exact copy of the same theorem on page 5. But just presenting the same theorem without even defining what the approximation means and how $a_i$ and $b_i$ are related to the quantification of fairness makes it very hard to understand the meaning of the paper in the introduction. If the authors want to summarize the key contributions in the introduction, they need to properly define the terminologies and important parameters to make it self-contained. - Another issue concerns the references.
The techniques in this paper are highly dependent on the previous techniques of Knittel et al. [2023], and the authors refer to that paper many times. Especially in Sec. 2.3, where tree operators are introduced, they refer to the same operators introduced in Knittel et al. But it is hard to see how these operators were originally defined, what the main changes in this paper are, and why these changes generate a better approximation. - In the experiment in Fig. 4, the ratio of cost increase due to fair clustering is about 8x when $n$ is about $10^3$. Can we call this a modest cost increase? Intuitively, isn't it relatively easier to achieve fairness for larger $n$, since there are many nodes and balancing those nodes without hurting the cost can be easier due to the many more ways to modify the tree? Can the authors compare their result with other algorithms in terms of achievable fairness and cost increase? There is no baseline comparison in the experiment. Also, it is not clear whether the performance of the proposed algorithm is robust to changes in the original hierarchical clustering algorithm. In the experiment, only average-linkage was considered as the initial algorithm. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - Can the authors provide experiments to compare their performance with other baselines? - Can the authors show that the performance of the proposed algorithm is robust to changes in the originally given (unfair) hierarchical clustering? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Limitations are not well addressed by the authors. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *“I think the presentation of the paper needs to be largely modified…”* We will revise the paper for the final draft per the suggestions of all reviewers. Notably, we will guarantee that all formal theorems and definitions have a high-level intuitive explanation in the introduction to alleviate the mentioned confusions. We furthermore refer the reviewer to our response to reviewer XyRG, wherein we note that such “impenetrable” theorem statements cannot be fully avoided for this problem. To alleviate this, we provide some informal theorem statements and terminology definitions which will be included in the main text to improve readability (see our response to XyRG). We additionally refer to the response to reviewer qPF4’s question to better understand the results. *“What are the main changes this paper presents [as compared to Knittel 2023]...”* Our work builds upon that of Knittel 2023 in two ways. First, we reduce the number of tree operations used from 4 to 2, where the more complicated operation, tree folding, is vastly simplified. Second, in contrast to Knittel 2023, who applied one operator to the entire hierarchy at a time, we alternate applying one operator and the other. This reduces the number of top-to-bottom passes of the algorithm from 4 to 1 and avoids an exponential (in $O(\delta \log n)$) increase in the cost. The reasoning here is somewhat involved and constitutes a main technical contribution. Moreover, our analysis is considerably more readable and provides the much-desired polylogarithmic cost approximation without the additional $n^\delta$ term of the former result. 
*“Can we call this a modest cost increase… isn’t it easier to achieve the fairness for larger $n$…?”* Given the hardness of the vanilla problem (i.e., it is APX-hard and the best efficient approximation is $O(\sqrt{\log n})$), and the increased complication posed by the fairness constraint, we would expect the cost to necessarily be increased by a large factor in the worst case (see the response to XyRG). We say “modest” because the cost increase is very small compared to $n$, which reflects the theory that it is at worst polylogarithmic in $n$. While larger $n$ may in some ways give us more leeway to combine clusters, it is extremely difficult to characterize the most adversarial examples, which is an open question. Notice that our algorithms are agnostic to the degree of color representation imbalance in clusters - the only time they consider colors at all is when they order clusters by their relative representations of a fixed color. More involved algorithms will likely be needed to leverage the large cluster sizes to carefully handle these imbalances. *“Compare to other baselines…”* This is a novel problem setting, and the only known algorithms with such a fairness guarantee are the present work and the cited Knittel 2023 paper. Here, we provide some experimental results contrasting the costs of the fair clusterings produced by the two algorithms for a full comparison. The table below compares the relative (to the unfair clustering) cost of our algorithm against the baseline algorithm of Knittel 2023 (with the parameter $\delta$ set to $1/4$): | n | Baseline | Our Algorithm | | -- | --------- | ---------------- | | 128 | 3.65240727 | 1.08586718 | | 256 | 5.68329859 | 1.42082465 | | 512 | 12.30571685 | 2.5869583 | | 1024 | 25.17125835 | 6.6745378 | | 2048 | 52.93911771 | 7.86944693 | As you can see, since our algorithm's cost increase is only polylogarithmic in $n$, it naturally scales much better with the input sample size. 
This further highlights the impact of our work and will be included in the final version of our paper. *“Robust to changes in the original hierarchical clustering?”* We thank the reviewer for noting that it would be interesting to study the fair clustering problem in a dynamic setting wherein an algorithm can handle updates to the underlying structure of the clustering. This is an important problem but outside the scope of the present work. Though, with some speculation, it may be possible to add a small amount of randomness to our algorithm (e.g., whenever we compare two cluster sizes) to achieve robustness to local changes while still achieving only marginally worse approximations and degrees of fairness. --- Rebuttal Comment 1.1: Comment: Dear reviewer, We hope this message finds you well. We wanted to check if you’ve had a chance to review our rebuttal for the paper. If you have any further questions or if there’s anything you’d like to discuss, please feel free to comment further. We are eager to address any concerns you might have. Thank you for your time and consideration.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Characterization and Learning of Causal Graphs with Small Conditioning Sets
Accept (poster)
Summary: Besides computational complexity, the weak spot of constraint-based causal discovery is the assumption of having an independence oracle. Especially with large conditioning sets, independence testing is difficult. This paper aims to address this limitation by restricting the number of conditioning variables, and provides a sound framework that allows us to study the guarantees one can obtain for constraint-based structure learning if one restricts the size of the conditioning sets. The authors show that the results can contain bi-directed edges and relate the obtained graph to maximal ancestral graphs. They further provide a sound algorithm to learn such graphs and evaluate it on a few synthetic and semi-synthetic examples. The theoretical contributions of the paper are interesting and presented clearly for the most part. The main limitation of the work is the set of points that I mention with respect to the empirical evaluation. Strengths: - Rigorous definition of assumptions, goal, and solution that can be obtained under the concept of restricting the size of the conditioning set. - The new formulation of k-closed graphs was introduced well and discussed in the context of previous notions (ancestral graphs). Further, Markov equivalence and faithfulness are adapted for the introduced graphs. - The authors propose a learning algorithm, k-PC, to learn the Markov equivalence class of the introduced k-closure of the true DAG. - The proposed algorithm has been empirically evaluated against PC, LOCI, and NOTEARS, and the code has been provided for reproducibility. Weaknesses: - The performance gain seems to be only evident in small sample regimes. - The new approach does not clearly outperform LOCI, and it would be interesting to see how PC performs when restricting the conditioning set as well. - It would be interesting to study a real-world example, or DAGs with non-linear relations. 
I could imagine that on the latter, using a smaller conditioning set can be a more substantial advantage, as independence testing with large conditioning sets is especially difficult for arbitrary functional relationships. For more details on the problem, and as a good source to motivate the proposed algorithm, please consider: Shah and Peters, “The Hardness of Conditional Independence Testing and the Generalised Covariance Measure,” The Annals of Statistics, 2020. Minor (suggestions): - For easier comparison, it might be beneficial to compute the area under the curve for the metrics shown in Figure 3. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - What is the intuition of step 5 in the algorithm? Why do we have to exclude it for Corollary 4.2? Not much intuition is provided regarding this step, and it just occurs in Corollary 4.2. - I understand that if we just run the PC algorithm and restrict the conditioning set, it does not inherit the same guarantees as k-PC, e.g., it would not find any bi-directed edges. What would be the other differences/failure cases of this naive approach? - Assume one naively restricts the maximum size of the conditioning set in the PC algorithm, which I think is an option in the pcalg package, for example. How does this version of PC perform if we restrict the conditioning size in a similar manner as for the proposed method (e.g., for the experiments shown in Fig. 3 & 4)? - How does the computational complexity or empirical runtime compare to PC? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Standard limitations occurring in constraint-based causal discovery (a variant of faithfulness, independence oracle). 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed comments and suggestions. Please see below for our replies. **1.** "performance gain seems to be only evident in small sample regimes"\ Note that showing our method outperforms the existing approaches in the small sample regime was the main goal of our experiments. When the sample size is large, k-PC will only be less informative than PC, since high-degree CI tests become more reliable and carry information about the underlying causal structure. **2.** "what if we run PC, and early terminate? Any failure cases?"\ This is a great question. Note that running PC only for conditioning sets of size at most k will lead to wrong causal conclusions. To see this, consider the graph in Figure 9 (in appendix), reproduced below for convenience: X->Y<-Z->U<-V Let k=0. Modifying PC to only condition on the empty set leaves the spurious edge Y-U in place after the skeleton phase. Applying orientation rules afterwards, we can get Y<-U or Y->U depending on which unshielded collider is oriented, either X->Y<-U or Y->U<-V. Both of these would be wrong. Thus, basic PC is not sound if we simply do not condition on certain subsets. Alternative versions such as stable PC might detect these incompatibilities, but they don't have a systematic way to handle them except not orienting such edges. Our algorithm orients such edges as bidirected, which allows inferring them as spurious. Since PC with this kind of early stopping is not sound even in the infinite sample regime, we have not included it in our experiments. **3.** "DAGs with nonlinear relations?"\ Apologies for not being clear on this: Please note that most of our experiments are with non-parametric relations, i.e., with discrete variables with randomly chosen conditional probability tables (uniformly from the probability simplex). 
All discrete variables are binary, since the results did not change with larger support and we were able to scale graphs more easily with binary nodes. Note that we use linear models only while comparing our method against NOTEARS, whose baseline version uses the linearity assumption. We will clarify this in the manuscript. **4.** "Shah and Peters (2020)"\ Thank you for this pointer! We will add this citation to help motivate our work from the hardness of independence testing. **5.** "Corollary 4.2 and Step 5 of Alg."\ Corollary 4.2 says that we can learn as much as the PAG of our representation, and this is sound and complete. However, to learn more about the representation (the k-essential graph), we need more orientation rules. These arise since our k-essential graph is not just any ancestral graph, but has more structure. In other words, there is more to be learned beyond the PAG. The rules in Step 5 allow us to learn more about the k-essential graph beyond the FCI orientation rules. We show that the overall algorithm with these rules is sound in Theorem 4.4. For a discussion on the challenges of completeness for learning the k-essential graph, please see the discussion in Section E.2 (appendix). **6.** "Computational complexity vs PC"\ The complexity of the learning algorithm will be similar to an early-stopped version of FCI, called Anytime FCI by Spirtes 2001. Although this is complicated and would depend on other parameters of the graph, such as primitive inducing paths, and thus the number and location of unobserved confounders, we can roughly bound it by $\mathcal{O}(n^{k+2})$, since for any pair we will not be searching for separating sets beyond the $\mathcal{O}(n^k)$ subsets of size at most $k$. Further runtime improvements, such as RFCI by Colombo et al. 2012, might be possible, but this requires a further understanding of the structure of k-closure graphs, which might be an interesting future direction. 
PC on the other hand runs roughly in time $\mathcal{O}(n^d)$ where $d$ is the degree of the graph. Thus, for sparse graphs and small $k$, we can expect a similar runtime. We will add this discussion with the relevant citations below to the manuscript. [Spirtes] "An Anytime Algorithm for Causal Inference," 2001. [Colombo, Maathuis, Kalisch and Richardson] "Learning High-Dimensional Directed Acyclic Graphs With Latent And Selection Variables," 2012. Thank you very much for your feedback and valuable comments! We hope that our responses will give the reviewer more confidence in their acceptance recommendation and that they would consider increasing their score. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their clarifications, which could alleviate some of my confusion regarding the experiments. I will increase my score to “accept”.
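As a side note for readers, the k=0 failure case from point 2 of the rebuttal above (graph X->Y<-Z->U<-V) can be checked mechanically with a d-separation test. The sketch below is our own illustration using the standard moralization criterion (the graph encoding and helper names are ours, not from the paper): it confirms that Y and U are d-connected given the empty set but d-separated given {Z}, which is exactly why a conditioning bound of k=0 cannot remove the spurious Y-U edge in the skeleton phase.

```python
def ancestors(graph, nodes):
    """All ancestors of `nodes` (inclusive) in a DAG given as {child: [parents]}."""
    seen = set(nodes)
    stack = list(nodes)
    while stack:
        for p in graph.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(graph, x, y, z):
    """True iff x and y are d-separated given set z (moralization method)."""
    keep = ancestors(graph, {x, y} | z)
    # Moralize the ancestral subgraph: undirected edges between each node and
    # its parents, and between co-parents of a common child.
    adj = {v: set() for v in keep}
    for child in keep:
        parents = [p for p in graph.get(child, ()) if p in keep]
        for p in parents:
            adj[child].add(p); adj[p].add(child)
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                adj[parents[i]].add(parents[j]); adj[parents[j]].add(parents[i])
    # Delete the conditioning set, then test reachability from x to y.
    frontier, seen = [x], {x} | z
    while frontier:
        for nb in adj[frontier.pop()] - seen:
            if nb == y:
                return False
            seen.add(nb)
            frontier.append(nb)
    return True

# Edges of the example DAG X->Y<-Z->U<-V, encoded as {child: parents}.
g = {"Y": ["X", "Z"], "U": ["Z", "V"], "X": [], "Z": [], "V": []}

print(d_separated(g, "Y", "U", set()))   # False: Y and U are dependent marginally
print(d_separated(g, "Y", "U", {"Z"}))   # True: separated by {Z}, but |{Z}| = 1 > k = 0
```

With only empty conditioning sets available, the first check is the only one PC can run, so the Y-U edge survives the skeleton phase, matching the rebuttal's argument.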
Summary: A new structure learning approach is proposed in limited data settings. In these settings, existing constraint-based causal discovery approaches struggle with statistical tests of independence. The main insight is that we can define an equivalence class of graphs based on an upper bound on the size of the conditioning set. Using this, a new learning algorithm is presented to learn the equivalence class. Strengths: The paper is well-motivated and clear in its contributions. The problem of causal discovery from limited datasets seems to be a significant one. The novel contributions include introducing the formalism of K-Markov equivalence which can then be used to derive a learning algorithm. The soundness of the algorithm is guaranteed and the completeness is shown in some cases. Overall the theoretical contributions seem strong in this work. Weaknesses: On the empirical side, comparisons are performed with state of the art structure learners on synthetic and semi-synthetic datasets. Since the motivation of the work was causal discovery from limited data, using some more real-world cases could have made the experiments much stronger. Overall, perhaps the empirical evaluation is not as strong as the theoretical contributions of the paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Are there real-world examples of learning from limited conditioning sets that could perhaps be used to strengthen the experimental evaluation? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: There are no limitations that are explicitly mentioned in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and feedback. "real-world examples of learning from limited conditioning sets"\ We share the reviewer's sentiment that more experiments with real-world datasets would make the paper stronger. The bnlearn repository is the main resource that causality researchers typically use, which is what we used as well. Note that the lack of datasets with ground-truth causal structures is a common struggle in causality research, which makes the theoretical grounding of any proposed solution even more important than in some other fields. We hope our work initiates a line of research and that follow-up work can apply our methods to more datasets as these become available. We will make our code available upon publication to help speed up this process. A related practical setting where it is naturally difficult to condition on certain subsets of variables is the following: Consider a causal graph where some variables are high-dimensional, such as images, while others are binary, such as labels or classifier outputs. It is very difficult to run reliable conditional independence tests with images in the conditioning set in practice. What can we learn about the causal structure then? An important future work is to extend our results to when an arbitrary set of CI tests cannot be performed. We hope that our work can motivate research in this direction. Once again, thank you for your positive comments! We would be happy to discuss these points further. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the response. I can understand the difficulty with "real-world" causal structures, but the availability of code should help follow-on work.
Summary: The authors study the problem of causal learning with bounded conditioning sets. It is an important question that up to what set of equivalence graphs a learner can infer a causal graph when it can only perform conditional independence tests with limited size of the conditioning sets. They characterize the equivalence set and propose a sound constraint-based algorithm for learning it. Strengths: The authors study an important research question. The paper is well-organized. The results are quite relevant and important. It nicely establishes the connections between the existing results about Markov equivalence class and newly define k- Markov equivalence class that is the set of causal graphs that have the same conditional independences with conditioning sets at most k. This results can be a foundation for further developments in causal graph learning. Weaknesses: A discussion about the complexity of the proposed algorithm will improve the paper. For instance, what is the complexity of checking whether two given DAGs are k-Markov equivalent? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their constructive comments and we are happy to hear that they found our results relevant and important. "Complexity of checking k-Markov equivalence." This is a good question. Currently, one needs to explicitly construct k-closure graphs given the two DAGs and check whether they are Markov equivalent. The step to make two variables adjacent requires one to loop through all conditioning sets of size at most $k$. This would take $\mathcal{O}(n^k)$. This is the main time-consuming step. Afterward, one can test Markov equivalence using the existing approaches. For example, Hu and Evans 2020 show that one can do this in $\mathcal{O}(ne^2+n^2e)$ time, where $e$ is the number of edges. Thus, the overall algorithm will indeed be polynomial-time when $k=\mathcal{O}(1)$. The complexity of the learning algorithm will be similar to an early-stopped version of FCI, called Anytime FCI by Spirtes 2001. Although this is more complicated and would depend on other parameters in the graph, such as primitive inducing paths, and thus the number and location of unobserved confounders, we can also roughly bound this by $\mathcal{O}(n^{k+2})$ since for any pair, we will not be searching for separating sets beyond the $\mathcal{O}(n^k)$ subsets of size at most $k$. Further runtime improvements, such as RFCI by Colombo et al. 2012 might be possible, but this requires a further understanding of the structure of k-closure graphs, which might be an interesting future direction. We will add this discussion with the relevant citations below to the manuscript. Thank you for your time! [Hu and Evans] Zhongyi Hu, Robin Evans, "Faster algorithms for Markov equivalence," 2020. [Spirtes] "An Anytime Algorithm for Causal Inference," 2001. [Colombo, Maathuis, Kalisch and Richardson] "Learning High-Dimensional Directed Acyclic Graphs With Latent And Selection Variables," 2012.
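To make the $\mathcal{O}(n^k)$ separating-set count from the rebuttal above concrete, here is our own back-of-the-envelope illustration (the function name is ours): the number of candidate conditioning sets of size at most $k$ grows polynomially for fixed $k$, in contrast to the exponential count when the conditioning-set size is unbounded.

```python
from math import comb

def num_sepsets(n, k):
    """Number of candidate separating sets of size at most k among n variables."""
    return sum(comb(n, j) for j in range(k + 1))

print(num_sepsets(20, 2))   # 1 + 20 + 190 = 211 candidate sets
print(num_sepsets(20, 20))  # 2**20 = 1048576: all subsets, exponential in n
```

This is only the enumeration count for one variable pair; the full learning-algorithm complexity also depends on the graph structure, as discussed in the rebuttal.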
Summary: While I have the background to understand this paper, I'm afraid I am not well positioned to judge its novelty and significance in the causal discovery subfield. In short, the PC algorithm struggles when conditioning sets are large, so the authors propose a modification of Markov equivalence for bounded conditioning sets, then graphically characterize this relationship, and then propose the k-PC algorithm that is more successful in synthetic small-sample regimes. Nothing in the math or presentation seems incorrect to me, though I did not closely examine the more technical parts of the paper. The experiments are on synthetic data, which is typical in this subfield, but that makes it difficult to know whether k-PC has practical advantages over PC in real-world cases, as there are likely regimes where PC is superior. Overall, this seems like competent well done work, but I lean towards rejection for this paper. The modification of PC and Markov equivalence for bounded conditioning sets seems like a straightforward and simple modification of existing definitions, so the contribution does not seem novel and significant to me. If there were convincing empirical results that showed this change makes causal discovery viable in new real-world contexts I might be convinced, but as it stands it's unclear to me whether the synthetic experiments are good evidence of this, especially with how little prose in the main text describes them. This might be more appropriate for a conference or workshop that is more narrowly focused on causality. I'm perfectly happy to be convinced otherwise, or have my opinion ignored in favor of reviewers/metareviewers with a better understanding of the subfield! 
Strengths: See main Weaknesses: See main Technical Quality: 3 good Clarity: 3 good Questions for Authors: See main Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See main Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your feedback and insights. Our responses are below. "this seems like competent well done work" Thank you! We appreciate this. "The modification of PC and Markov equivalence for bounded conditioning sets seems like a straightforward and simple modification of existing definitions, so the contribution does not seem novel and significant to me." We respectfully disagree with this assessment. As we stated in the paper, by giving up conditioning on a set of variables, we no longer have the classic Markov equivalence condition. In fact, it is not a priori clear if there even is a simple graphical Markov equivalence condition when we restrict the size of conditioning sets. Our results show that one can do this by using ancestral graph machinery - which is typically used for handling latent confounders - in the context of characterizing the k-Markov equivalence class for systems without latents. This, in our opinion, is a surprising finding. The comparison with LOCI shows the power this representation has over the existing work addressing this problem. In terms of novelty, the key lemmas are all derived from first principles, and as far as we are aware, the k-Markov equivalence characterization is not implied by any of the existing literature. In terms of significance, our goal was to demonstrate that the proposed approach could lead to more robust causal discovery (various F1 scores are used in the experiments) in the small sample regime where large CI tests become unreliable. We believe that rendering causal discovery more reliable in practice is a significant goal and hope that our contribution takes us one step closer to this goal. We hope that this reiteration of our contributions helps convince the reviewer that our results are novel and significant and that the reviewer would reconsider their evaluation in light of this. If you have follow-up questions, please let us know. We would be happy to engage further. 
Thank you once again for your time and feedback! --- Rebuttal Comment 1.1: Comment: Thanks for the explanation! In the end I'm happy to see this accepted
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper aims to address the problem that in constraint-based causal discovery based on conditional independence tests, the tests become statistically unreliable with limited data when the conditioning sets are large. The proposed solution is to use only conditional independence tests for relatively small conditioning sets. To facilitate this goal, this paper provides a characterisation of k-Markov equivalence between causal DAGs, in the sense of entailing the same conditional independence statements in which the conditioning set has a cardinality not exceeding k. Since the characterisation is in terms of maximal ancestral graphs, an FCI-like algorithm is proposed to learn such equivalence classes and is empirically evaluated against some benchmarks. Strengths: This paper is well written and very readable. The technical results are sound and the empirical evaluation is reasonable. Weaknesses: Unfortunately, as far as I can tell, this paper largely reproduces the results of an old paper by Peter Spirtes (2001), "An Anytime Algorithm for Causal Inference", Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics. It seems to me that all the main results and insights of this paper were already presented in Spirtes's paper. I may have missed something important. I will increase my score if the authors can convince me that their contributions remain sufficiently original and significant given Spirtes's paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Are the main results of this paper a reproduction of Spirtes's (2001) results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to first thank the reviewer for pointing out such a relevant paper to us. Please allow us to compare our work with Spirtes (2001). The short answer to your question "Are the main results of this paper a reproduction of Spirtes's (2001) results?" is no, they are not a reproduction of Spirtes's results. We elaborate below: First, Spirtes (2001) shows that one can halt FCI once it has exhausted all conditioning sets of size at most k; the result will still be a correct PAG, but a potentially less informative one. There are at least three main algorithmic differences: 1. We are not trying to learn a PAG. Our goal post is different. Namely, we don't actually have any latents in the system, nor any selection bias. We define the equivalence class to be learned by the algorithm a priori, knowing that we won't be conditioning on sets of size greater than k given a DAG. 2. The undirected edges in our representation do not represent selection bias. We use them to represent a different graph union operation. The lack of latents gives us this flexibility, which allows us to distinguish different sets of graphs by using both --- and o--o edge marks for this. Please see Definition 3.15 for how each edge in our k-essential graph represents something specific about our equivalence class of graphs without latents. This machinery allows us to develop a finer equivalence class representation than LOCI [20]. 3. Even if we simply compare the two algorithms ignoring this context, since our goal post is different, we have additional orientation rules that do not appear in FCI. Please see the new rules R11 and R12 in Algorithm 1. This again is because our goal post is different from learning a PAG of the true causal graph. Furthermore, we modified the existing FCI rules since we do not have undirected edges that represent selection bias, and therefore our undirected edges are treated as if they are circle edges. 
This can be seen in lines 1064-1072 in the supplementary material. In terms of the contributions of the paper, we also have a new representation to capture all the d-separation statements of bounded conditioning set size. Importantly, we use these to give necessary and sufficient graphical k-Markov equivalence conditions between two causal DAGs. These are not implied by the results of Spirtes 2001, which focuses on the soundness of early stopping of FCI in relation to the true PAG. We hope that this convinces the reviewer that the results of our paper are not simply reproductions of Spirtes 2001. It goes without saying that we will cite this paper and thanks to your input, we will be able to put our paper's contributions in better and clearer context in relation to the existing causal discovery literature. Thank you once again for your valuable feedback. --- Rebuttal Comment 1.1: Title: Score increased Comment: Thanks for the helpful response to my question. I still think the results are fairly easy given what is already shown by Spirtes, but I agree that some of them are not simply corollaries of Spirtes's results. I have increased my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply and for your reconsideration. Based on your remark, in addition to citing and explaining this work in the main paper, we will also add a dedicated subsection in the Appendix to more closely compare the two papers. Specifically, for the causally sufficient case, we will add causal graphs to showcase how much causal knowledge can be extracted by our algorithm compared to Anytime FCI. We explain one such difference below: One example we may use for this comparison is the graph in Figure 2. 
Here, our algorithm will learn the representation on the right: We can infer that either $a$ causes $d$ or $d$ causes $a$, noted by the undirected edge $a$ --- $d$ in our representation, rather than the circle edge $a$ o--o $d$, which would allow $a$, $d$ to be non-adjacent in some DAGs. Anytime FCI here will instead output circle edges between $a$ and $d$, i.e., $a$ o--o $d$. Even if we try to incorporate the knowledge that the underlying graph is causally sufficient, it is not obvious how one can conclude that $a$ and $d$ must be adjacent in *every DAG that induces the same degree-$k$ d-separation statements* from the Anytime FCI output. One way to do this would be to remove the edge from this PAG and check whether $a$, $d$ can now be d-separated by some conditioning set of size at most $k$. If yes, this changes the equivalence class, which means the edge must be there in any DAG. Our local rule R12 can instead orient this as an undirected edge, signifying that $a$ and $d$ must be adjacent in every underlying DAG by leveraging our equivalence class representation, without having to run this type of post-processing, which might be computationally intensive for larger graphs. We once again thank the reviewer for pointing us to Spirtes 2001 and enabling this discussion.
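As an aside, the edge-removal check described in the reply above amounts to a brute-force search for a small d-separating set. The sketch below is purely illustrative and is not the paper's algorithm: the dict-of-parents graph encoding, all function names, and the moralization-based d-separation test are our own assumptions.

```python
from itertools import chain, combinations

def ancestors(dag, nodes):
    """Return the given nodes together with all of their ancestors."""
    result, stack = set(nodes), list(nodes)
    while stack:
        for parent in dag[stack.pop()]:
            if parent not in result:
                result.add(parent)
                stack.append(parent)
    return result

def d_separated(dag, x, y, z):
    """Moralization-based d-separation test: is x _|_ y | z in the DAG?"""
    keep = ancestors(dag, {x, y} | z)
    adj = {v: set() for v in keep}
    for v in keep:
        parents = dag[v] & keep
        for p in parents:                    # undirect parent-child edges
            adj[v].add(p); adj[p].add(v)
        for p in parents:                    # "marry" co-parents
            for q in parents:
                if p != q:
                    adj[p].add(q)
    # x and y are d-separated by z iff deleting z disconnects them.
    frontier, seen = [x], {x}
    while frontier:
        v = frontier.pop()
        if v == y:
            return False
        for w in adj[v] - z - seen:
            seen.add(w)
            frontier.append(w)
    return True

def separable_with_small_set(dag, a, d, k):
    """Is there a conditioning set of size at most k that d-separates a and d?"""
    others = [v for v in dag if v not in (a, d)]
    subsets = chain.from_iterable(combinations(others, r) for r in range(k + 1))
    return any(d_separated(dag, a, d, set(z)) for z in subsets)
```

The enumeration over candidate conditioning sets grows combinatorially in $k$ and the number of nodes, which is exactly the kind of post-processing cost that a local orientation rule such as R12 would avoid.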
Sequential Subset Matching for Dataset Distillation
Accept (poster)
Summary: This paper proposes a novel dataset distillation method called SeqMatch which focuses on extracting high-level features from later training trajectories. The authors highlight a limitation in state-of-the-art data distillation methods, which tend to condense low-level information from easy data while overlooking the high-level information contained in hard data. In response to this issue, the paper introduces a novel optimization technique that generates multiple small sets of synthetic data. Each of these sets distills distinct knowledge from various stages of the training trajectories. By addressing the inherent problem observed in previous dataset distillation methods, the authors conduct experiments to showcase the efficacy of SeqMatch. Strengths: 1. The idea of condensing different sub-datasets for different stages is novel and interesting. 2. The proposed method does not require extra computation cost. 3. The writing is good and easy to follow. Weaknesses: - The paper lacks evaluation numbers for certain settings, such as SeqMatch-IDC, on datasets like CIFAR100 50IPC, Tiny-ImageNet, and ImageNet subset. (SeqMatch-IDC seems to be the most favorable setting) This omission makes it difficult to determine the performance of SeqMatch in comparison to other methods in these specific scenarios. - SeqMatch underperforms the baseline method FTD on Tiny-ImageNet and ImageNet subset, suggesting that SeqMatch may not scale well to larger datasets. This raises the question of why SeqMatch was chosen over FTD in these cases. - The paper lacks an ablation study, which would provide valuable insights into the impact of the number of subsets, K, on the performance of SeqMatch. Including such an evaluation would enhance our understanding of how SeqMatch operates and how different parameter settings influence its performance. - The caption of Table 1 is somewhat misleading. 
Although IDC is not categorized as a factorization-based method, it does employ data parameterization (factorization can be treated as a special data parameterization). Therefore, it would be more appropriate to compare IDC with RTP and HaBa rather than other methods distilling information into a single image. - Minor: There are citation inconsistencies in the appendix. For example, FTD is referred to as [12] in the appendix but as [11] in the main text. Additionally, the appendix lacks a reference section, making it difficult to trace the sources of the cited works accurately. - The reproducibility checklist is chosen as Yes, but no code is provided. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How is the repeated evaluation conducted to calculate the error bar? The motivation of the paper is that SOTA methods fail to distill high-level information from hard data. From my perspective, a more natural and straightforward solution would be to divide the original training dataset into different subsets based on data difficulty and then distill distinct datasets based on these subsets. I am wondering if the authors explored this approach or considered it as an alternative solution in their research. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments and suggestions. We answer your questions in order. **Q1:** The paper lacks evaluation numbers for certain settings. **A1:** This is attributed to the notably sluggish training speed observed in the IDC[21] framework. Our experimental setup aligns with the configurations outlined in IDC[21] and DREAM[51], both of which exclusively present results of IDC under ipc=10, CIFAR100. Unlike other gradient-matching baselines, IDC[21] employs an additional class-wise loop within a complex three-level nested loop structure. This nesting results in slow training speed, taking up to 14 days on a single Nvidia V100 GPU (ipc=50, CIFAR100). Addressing your suggestion, we have initiated the training of SeqMatch-IDC with ipc=50, CIFAR100, and we are committed to incorporating the updated results. **Q2:** underperforms FTD on Tiny-ImageNet and ImageNet subset. **A2:** As asserted in lines 77-78, our SeqMatch serves as a training strategy seamlessly integrable into widely used dataset distillation frameworks. The central thrust of our contribution lies in uncovering a general, yet often unnoticed, limitation within existing distillation methods, wherein each synthetic dataset tends to encapsulate homogeneous features. Concomitantly, we propose an innovative and effective technique to mitigate this particular challenge. FTD[11] outperforms our base distillation method MTT[5] significantly on Tiny-ImageNet and ImageNet subsets, thereby diminishing the performance improvement attributed to SeqMatch-MTT. In light of this observation, we have initiated experiments utilizing FTD[11] as our foundational distillation approach and are committed to incorporating the updated results into the forthcoming version of our paper. **Q3:** The paper lacks an ablation study. **A3:** We appreciate your feedback and have conducted an ablation study on the number of subsets K on CIFAR10 with SeqMatch-MTT. 
The results are listed below:

| K | 1 | 2 | 3 | 4 | 5 |
|:--|:-:|:-:|:-:|:-:|:-:|
| ipc=10 | 65.3 | 66.2 | 65.6 | 65.0 | 63.5 |
| ipc=50 | 71.6 | 73.2 | 74.4 | 74.1 | 74.3 |

Additionally, we present the scatter plot, denoted as **Figure 5** in the provided response PDF, which illustrates the findings from the ablation study conducted on the parameter K. The outcomes distinctly indicate a decline in performance as K escalates from 2 to 5 when ipc=10. A plausible rationale for this phenomenon lies in the fact that subsets with insufficient images (ipc < 5) struggle to effectively distill comprehensive knowledge from the target dataset. Conversely, the degradation in performance as K increases remains marginal when ipc=50. This can be attributed to the subset size surpassing the threshold required to adequately capture essential features (ipc > 10), thereby substantiating the observed trend. **Q4:** The caption of Table 1, citation inconsistencies, and the reproducibility checklist. **A4:** We extend our gratitude for the meticulous review and extend our apologies for the errors identified. Acknowledging the oversight, we concede that the baseline IDC[21] should indeed be classified as a factorization-based method. Consequently, we will diligently revise the presentation in Table 1 within our manuscript. We are committed to modifying the citation indexing and ensuring proper reference inclusion within both the present and forthcoming iterations of our work. With regard to the availability of our code, we intend to make it publicly accessible upon acceptance of our submission. **Q5:** How to calculate the error bar? **A5:** As stated in line 283, each network is initialized **5 times** with different random seeds to evaluate the synthetic datasets. We follow the experimental setup outlined in the baselines MTT[5] and IDC[21]. We report the mean and variation of the accuracies. 
**Q6:** divide the original training dataset based on data difficulty. **A6:** We conducted experiments wherein the original dataset was partitioned into three equal subsets termed "Easy," "Medium," and "Hard," respectively. The sorting criterion is based on the instance-wise average loss reduction observed during standard training, following [14]. Subsequently, the synthetic dataset underwent division into three subsets, each earmarked for distillation from the corresponding "Easy," "Medium," and "Hard" target subsets. Post distillation, we proceeded to assess the synthetic dataset through two distinct approaches: (a) The neural network was trained using the synthetic dataset in the order of "Easy," "Medium," and "Hard," which is marked as "Sequential" in the following table. (b) The neural network was trained utilizing the entirety of the synthetic dataset, which is marked as "Mixed" in the following table. We report the results with the reference of our SeqMatch below:

|  | Easy | Medium | Hard | Entire |
|:-:|:-:|:-:|:-:|:-:|
| Sequential | 61.7 | 65.3 | 65.9 | 65.9 |
| Mixed | - | - | - | 65.5 |
| SeqMatch | 67.9 | 70.8 | 74.4 | 74.4 |

The results highlight a significant performance gap between the Divide-and-Conquer approach and our proposed SeqMatch. This discrepancy arises because feature extraction in standard training originates from mini-batches drawn from the entire dataset. The gradients, considered as the teacher trajectories in dataset distillation, constitute a mixture derived from both "Easy" and "Hard" instances. Despite "Easy" instances with marginal training loss contributing a lower proportion of gradients during subsequent training, these instances play a pivotal role in stabilizing the optimization direction once the network becomes overfitted to the "Hard" instances. Consequently, dividing the original training dataset along the instance dimension is not a more favorable choice compared to division within the "Epoch" dimension. 
The latter approach is the methodology implemented in SeqMatch. --- Rebuttal Comment 1.1: Title: A request for reviewing the rebuttal Comment: Dear reviewer xCSw, Thank you for your time and dedication to the review process. We are writing to kindly request your response to the rebuttal we submitted in response to your valuable feedback. We understand that reviewers have busy schedules. However, we are eagerly anticipating your constructive suggestions to ensure the effective resolution of any misunderstandings. With only 4 days remaining in the discussion period, we are concerned that there may not be sufficient time to further clarify any misconceptions in our submission. If you could spare a moment to review our rebuttal and share your thoughts, it would immensely help us enhance the quality of the submission. We kindly ask for your comments on any remaining concerns you may have about our submission. Your assistance in this matter would be greatly appreciated. We truly value your input and eagerly look forward to your response at your earliest convenience. Best The authors
Summary: This work proposes a change to existing dataset distillation methods by sequentially optimizing different subsets at a time. At each iteration, the existing subset is frozen and a new subset of data is *added* to it and optimized. This method allows different subsets of the synthetic data to capture different levels of features required by a network to learn during training time. This method boosts the state of the art for subsets with >1 IPC (since this method does not make sense with 1 IPC). Strengths: I like this paper a lot. The authors addressed an obvious problem with existing dataset algorithms (all samples capture the same level of features) in an elegant way that is adaptable to all existing and future backbone methods. The authors both empirically and theoretically show that jointly optimizing the entire set couples the gradients in a way that prevents the synthetic set from learning the necessary variety of features. The visuals are all very nice, clearly illustrating the authors' points. Weaknesses: Algorithm 1 is a bit confusing to read. According to line 4, it seems that a single network initialization is used to optimize the entire subset (but this can't be true since it would catastrophically overfit). It is also unclear what the $n$ parameter is. As the algorithm is currently presented, I cannot see how MTT can be slotted into it. Maybe it would be clearer if an additional inner loop was included along with a generic distillation loss and doing away with the sum over $m$? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I would like for the authors to make Algorithm 1 more clear as described above. It would also be nice to see visuals when there is just 1 sample per subset (i.e., K=IPC) even with just IPC 2 or 3 since this case would likely have the largest differences between the subsets. Lastly, I recognize his surname is quite long, but Figure 1 should probably cite "Cazenavette et al." 
rather than "George et al." :) Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments! We sincerely appreciate the time and effort you dedicated to reviewing our work and answer your questions below. --- **Q1:** Algorithm 1 is a bit confusing to read. **A1:** We have made revisions to our Algorithm 1, and have appended the revised version in the response PDF. Regarding your concern about clarity in the optimization process, we have now included an inner loop in Algorithm 1, which explicitly illustrates how each subset is optimized in SeqMatch. We hope that this addition clarifies the approach and enhances the understanding for readers. We have added a conditional statement (lines 6-9) in Algorithm 1 to provide a clear indication of this step. In addition, we have revised the section on input parameters to better explain the notation used. Specifically, we have explicitly defined $N$ as the number of iterations in optimizing each subset. We have introduced a new input parameter, "Base Distillation Method" $\mathcal{A}$, which indicates how other distillation methods are embedded into SeqMatch. **Q2:** visuals with ipc = 2 and 3. **A2:** We have conducted experiments for $K=2$ and $K=3$ subsets with $\texttt{ipc}=1$ in each subset using our proposed SeqMatch. The experiments are based on the MTT baseline on the CIFAR10 dataset. The evaluation accuracies are listed below:

| IPC | 2 | 3 |
|:--|:-:|:-:|
| MTT | 51.6 | 54.5 |
| SeqMatch | 52.9 | 57.0 |

SeqMatch outperforms MTT with performance enhancements of $\\{1.3\\%, 2.5\\%\\}$ under the settings ipc=$\{2,3\}$. For SeqMatch, we designated parameter values of max start epoch as ${2, 4, 6}$ while retaining the remaining parameters consistent with the ipc=$10$ configuration. It is noteworthy that SeqMatch(ipc=2) consists of the first two subsets of SeqMatch(ipc=3). As a result, the visualization corresponding to SeqMatch(ipc=2) is seamlessly embedded as the first two rows within the SeqMatch(ipc=3) visualization. 
The visualization is demonstrated as the **Figure 4** in the response PDF. However, it is imperative to acknowledge that the disparity between MTT and SeqMatch-MTT is comparatively less pronounced than that illustrated in Figure 3 of our paper. This variance can be attributed to the smaller max start epoch configuration (${2, 4, 6}$) for SeqMatch(ipc=3), a parameter designed to ensure subsequent subsets contribute to enhancing the performance of the initial subset. **Q3:** improper citation. **A3:** We sincerely apologize for the errors in the citation of MTT[5]. MTT[5] is a crucial baseline method that has greatly influenced our current work, serving as a significant milestone in the dataset distillation task. We have made the necessary revisions in the citation, accurately referencing "Cazenavette et al." in the revised version of our paper. We will keep it in mind in our following work to correctly cite the research papers. We are grateful for pointing out the overlooked mistakes of our submission. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for answering my questions, I raise my score to an 8. The visuals for K=IPC are particularly interesting; several classes have very stark differences between images. For example, looking at the frogs, the first image can almost be thought of as an "average" frog. The next biggest performance boost can then be gained by "learning about" bright green frogs. After that, the next biggest performance boost can be gained by "learning about" brown frogs. I wonder if this process were repeated for more IPC if we would start to distill samples that resemble the "harder" or "long-tail" samples from the original dataset, like one of the red, yellow, or blue frogs you can see [here](https://knowyourdata-tfds.withgoogle.com/#dataset=cifar10&filters=default_segment.cifar10.label.value:frog). 
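To make the sequential optimization concrete, here is a minimal toy sketch of the evaluation-phase training discussed in this rebuttal. Everything here (the logistic-regression learner, the toy data, and all names) is an illustrative assumption on our part rather than the paper's code; the only point being illustrated is that the model visits each frozen subset in sequence instead of training on the union at once.

```python
import numpy as np

def train_on(w, X, y, lr=0.1, steps=200):
    """Run a few gradient steps of logistic regression on one subset."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def sequential_eval(subsets, dim):
    """Train one model on subset S1, then S2, ... in order (SeqMatch-style)."""
    w = np.zeros(dim)
    for X, y in subsets:  # each frozen subset is visited once, in sequence
        w = train_on(w, X, y)
    return w

# Toy data: two Gaussian blobs, interleaved into two "subsets".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
subsets = [(X[::2], y[::2]), (X[1::2], y[1::2])]
w = sequential_eval(subsets, dim=2)
acc = ((X @ w > 0) == y).mean()
```

In the actual method each subset would of course be a distilled synthetic set and the learner a neural network; the loop structure is the part that mirrors the sequential design.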
--- Reply to Comment 1.1.1: Title: Thanks for the comments and raising the score Comment: Dear reviewer QS3g, Thank you for your prompt response! We have taken note of the color variations among synthetic instances from various sequential subsets. Drawing from your insights, we would like to propose a hypothesis: could enhancing the "mutual orthogonality" of the synthetic dataset potentially contribute to improved overall performance? This notion stems from considering each synthetic instance as a fundamental "basis" element. In an effort to substantiate this hypothesis, we are contemplating extending the application of SeqMatch to scenarios involving larger ipc values (>50) and a higher number of subsets (K). Such an expansion could serve as a means to test the hypothesis. Your feedback has prompted us to embark on this exploration. Best regards The authors --- Rebuttal 2: Title: New findings inspired by your comments Comment: Dear reviewer QS3g, We deeply appreciate your constructive suggestions and thorough review of our work. Drawing inspiration from your comments, >start to distill samples that resemble the "harder" or "long-tail" samples from the original dataset We hypothesize that the synthetic data within S3 (representing hard data) of SeqMatch-MTT can serve as a condensed subset of challenging instances, capable of being utilized to 'concentrate on a sparse set of challenging examples,' similar to the approach of FocalLoss[55]. This potential utilization could enhance the performance of a standard training procedure using the original dataset. Consequently, we conducted experiments to empirically test this hypothesis.

| ResNet-18 | Original Dataset | Original Dataset + S3 of SeqMatch-MTT |
|:-:|:-:|:-:|
| CIFAR10 | 95.74 | 96.38 (**+0.64**) |
| CIFAR100 | 78.05 | 78.64 (**+0.59**) |

The experimental results **align with our expectations**; S3 enhances the standard training of the original dataset by **0.64%** and **0.59%**. 
This investigation could be a potential application of SeqMatch, to operate in reverse, further amplifying the performance improvement achieved through standard training. In detail, the standard training employs basic data augmentation and an SGD optimizer (0.05 learning rate, 0.9 momentum, cosine annealing learning rate, 5e-4 weight decay) for 200 epochs. The batch size of the original dataset is 128; we randomly sample 13 instances from S3 and integrate them with the original mini-batch in each iteration (128 original + 13 SeqMatch S3). Best Regards The authors [55] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017.
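The per-iteration batch composition described above (128 original instances plus 13 sampled from S3) could be sketched as follows; the array-based generator interface and all names are our own illustrative assumptions, not the paper's code:

```python
import numpy as np

def mixed_batches(original_x, s3_x, batch_size=128, n_extra=13, seed=0):
    """Yield (original batch, S3 batch) pairs: 128 original + 13 S3 instances."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(original_x))
    for start in range(0, len(order) - batch_size + 1, batch_size):
        base = order[start:start + batch_size]       # one shuffled original batch
        extra = rng.choice(len(s3_x), size=n_extra, replace=False)  # resample S3 each step
        yield original_x[base], s3_x[extra]
```

In training, the two parts would be concatenated into a single mini-batch of 141 instances before the forward pass.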
Summary: This paper proposes a new method called sequential subset matching (SeqMatch) for dataset distillation. The proposed method is designed to continuously generate synthetic images at different training (distillation) iterations. This strategy is inspired by the general mechanism of optimization, which captures characteristics (low-level features) of easy instances in an early stage, but takes characteristics (i.e., high-level features) from increasingly difficult instances. SeqMatch was applied to various dataset distillation methods and showed marginal but consistent improvements over the base methods to which SeqMatch was applied and other baseline methods on four datasets. Strengths: 1. The analysis about the general mechanism of optimization is so insightful that it deserves a lot of attention in other studies. 2. The plots of several figures provide good support for the arguments in this paper. e.g., Figures 1 and 2 well illustrate the effect of the general mechanism of optimization on dataset distillation and the coupling issue, respectively. Weaknesses: First of all, I don't understand the motivation behind designing some SeqMatch. 1. How was the claim for the optimization mechanism that captures low-level features in the early stages and high-level features in the later stages verified? Figure 1 seems to have been used to verify this claim, but can hard instances and easy instances represent high-level and low-level features, respectively? 2. I don't understand how the analysis of the general mechanism of optimization was used to design the SeqMatch method. In particular, what is the motivation for applying the SeqMatch method when training f_theta in the evaluation phase? In addition, in Table 1, most of the performance improvements acquired via SeqMatch are very marginal (~0.2). 
Performance improvement is not an absolute determining factor in judging the superiority of the proposed method, but it can be used to verify that the method works as claimed in this paper. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: 1. In "Gradient Matching Methods" in Section 3, are the gradients from g_1 to g_M the gradients from different layers of a network? Or gradients from another training iteration? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 3 good Contribution: 3 good Limitations: This paper adequately addressed the limitation and promised to solve it in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** How was the claim for the optimization mechanism that captures low-level features in the early stages and high-level features in the later stages verified? **A1:** The derivation of the claim has been explicitly presented in lines 161-166 of our paper. In support of this claim, it is crucial to highlight that relevant citations [1], [14], and [44] emphasize that Deep Neural Networks (DNNs) exhibit an optimization pattern where they initially prioritize learning from simpler instances and gradually adapt to more complex ones. It is noteworthy that the term "low-level" features, referring to features primarily extracted by the lower layers, is established in [44]. Our contribution lies not merely in the exposition of this claim, but rather in the extension and validation of these findings within the specific context of dataset distillation. By delving into the task of dataset distillation, we broaden the scope of these existing claims, exploring their applicability and implications in a distinct domain. In the context of dataset distillation, these designated training instances are repurposed as a validation set to assess the efficacy of knowledge encapsulated within the synthetic dataset. An effective synthetic dataset should manifest an equivalent loss reduction across both easy and hard instances, mirroring the behavior of a standard vanilla training set. However, as vividly depicted in Figure 1, the current distillation method falls short in achieving this equilibrium, resulting in disparate loss reductions for easy and hard instances. Conversely, our proposed method showcases demonstrable improvements in mitigating this discrepancy, effectively bridging the gap between the two instance categories. **Q2:** how the analysis of the general mechanism of optimization was used to design the SeqMatch method. In particular, what is the motivation for applying the SeqMatch method when training f_theta in the evaluation phase? 
**A2:** We have succinctly expounded upon the impetus driving the conception of the SeqMatch methodology within the delineated sections of our manuscript, specifically in lines 65-73 and lines 233-238. In essence, our motivation is inspired by the finding that simply increasing the size of the synthetic dataset is analogous to amplifying the weights in a single-layer perceptron configuration, from which only a limited performance improvement can be gained. Recognizing the necessity for a more discerning approach, we advocate for a paradigmatic transition from the unidimensional architecture of a single-layer perceptron to the multi-layered intricacies of a Multilayer Perceptron (MLP) and thus encourage the distilled data to be optimized in a sequential manner. Therefore, SeqMatch requires the distilled data to be learned in a sequential manner in the evaluation phase as well. **Q3:** Performance improvements acquired via SeqMatch are very marginal (~0.2) **A3:** SeqMatch has acquired an **averaged** performance improvement of **1.28%** over the baseline MTT[5] and **0.58%** over the **SOTA** IDC[21]. In particular, SeqMatch improves the performance significantly in the setting of **ipc=50** with an improvement of **1.9%** over the baseline MTT[5] and **0.93%** over the **SOTA** IDC[21]. Furthermore, our method does not only improve specific algorithms. Our sequential learning framework can be widely applied to existing data distillation models, enhancing their performance. We also hope the versatility of our approach can be taken into consideration as a contribution. It would be **unjust** to characterize these performance enhancements as merely marginal, approaching the 0.2% range. **Q4:** In "Gradient Matching Methods" in Section 3, are the gradients from g_1 to g_M the gradients from different layers of a network? Or gradients from another training iteration? 
**A4:** This represents the trajectory of gradients from the initial state $\theta_0$ to the converged weights $\theta_M$. Consequently, $g_1$ through $g_M$ denote the gradients targeted for optimization in each iteration of a conventional optimization process. --- [1] Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International conference on machine learning, pages 233–242. PMLR, 2017. [14] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31, 2018. [44] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13, pages 818–833. Springer, 2014. --- Rebuttal 2: Title: A request for reviewing the rebuttal Comment: Dear reviewer Qidh, We appreciate the time you dedicated to reviewing our submission. Recognizing that you may have a busy schedule, particularly when evaluating areas outside your own research field, we kindly request that you consider investing some time to explore more about dataset distillation. This emerging and pivotal machine learning task holds significant importance. We hope you can engage in a comprehensive review of both the rebuttal we have submitted and the insightful comments provided by Reviewer QS3g, an expert in the field of dataset distillation. Your constructive feedback is immensely valuable to us, and we sincerely thank you for your reviewing. Best The authors --- Rebuttal 3: Comment: Dear Reviewer Qidh, This is another friendly reminder to acknowledge that you have read the rebuttal and the other reviews. 
Please also share how they change your view on the paper, if at all. Thanks again for your service! Best, AC --- Rebuttal Comment 3.1: Comment: Sorry for the late reply; I was catching up on some of the previous work on dataset distillation. Most of my concerns have been resolved, so I will raise my initial rating. --- Reply to Comment 3.1.1: Comment: Dear reviewer Qidh, Thank you for your feedback during the review of our paper. We have incorporated your suggestions and rephrased lines 146-148 in Section 3 to enhance clarity. We have also carried out additional experiments, as detailed in A2 for reviewer QS3g and in A3, A6, A7, A9 for reviewer xCSw, in order to provide **a comprehensive evaluation** of our proposed SeqMatch. Thank you once again for your timely response. Best The authors
Summary: This paper investigates an issue with dataset distillation, where synthesized datasets tend to overly condense low-level features but fail to efficiently incorporate high-level ones. The authors argue that this is due to existing methods treating the synthetic dataset as a unified entity and equally optimizing each instance, leading to a coupling issue as the size of the synthetic dataset increases. To address this problem, they propose a new dataset distillation strategy called Sequential Subset Matching (SeqMatch). SeqMatch divides the synthetic dataset into multiple subsets and optimizes them in sequence, mimicking the learning process from low-level to high-level features. This approach allows each subset of the synthetic dataset to progressively capture more complex, high-level features, reducing the coupling issue and enhancing overall performance. Strengths: This paper is well-written, well-motivated, and well-organized. The authors provide comprehensive experiments on various datasets such as CIFAR-10, CIFAR-100, TinyImageNet, and subsets of the ImageNet. They provide insightful analysis of the experimental results, discussing the impact of different factors. The authors provide a detailed algorithm description that translates their theoretical insights into practical application. Weaknesses: 1. It's not clear how generalizable these results are to other tasks and datasets. Their experiments are based on specific DNN architectures and datasets, and the proposed method's effectiveness might vary under different conditions. 2. Since the method divides the synthetic dataset into subsets and optimizes them sequentially, there might be a risk of overfitting, especially when the number of subsets is high or when subsets are small. 3. The method proposed requires a series of sequential optimization processes, which could potentially increase the computational cost and time required for training. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How do you account for the randomness in the initialization of the synthetic dataset, and how does it affect the rate of convergence of each synthetic instance? 2. The authors mention that instances sharing a similar initialization within the same class will converge similarly. Could this lead to any form of bias in your findings? 3. The authors identified that current gradient matching methods prioritize easy instances during early epochs. What is an effective mechanism for identifying or quantifying the 'difficult' instances to create a more balanced emphasis? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: In section 5.4, the authors discussed two limitations of their work. Their openness in acknowledging these limitations adds to the credibility of their research and provides useful guidance for follow-up studies in this field. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
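The sequential optimization described in the summary can be sketched as follows. This is an illustrative reading only: `seqmatch_skeleton`, `toy_distill`, and the scalar toy model are hypothetical constructions added for clarity, not the paper's implementation.

```python
def seqmatch_skeleton(distill_stage, K, theta0):
    """Illustrative Sequential Subset Matching loop: the synthetic set is
    split into K subsets, and subset k is distilled against the model state
    reached after learning subsets 1..k-1, so later subsets can encode the
    higher-level residual knowledge the earlier ones missed."""
    theta, subsets = theta0, []
    for k in range(K):
        S_k, theta = distill_stage(theta, k)  # condense stage-k knowledge
        subsets.append(S_k)
    # At evaluation time the subsets are learned in this same order.
    return subsets, theta

# Toy instantiation: the "model" is a scalar estimate of a target value;
# each stage's synthetic datum moves the model halfway toward the target,
# so every later subset captures the residual left by the earlier ones.
TARGET = 10.0

def toy_distill(theta, k):
    S_k = (TARGET - theta) / 2  # the "synthetic data" for stage k
    return S_k, theta + S_k     # model state after learning S_k

subsets, theta = seqmatch_skeleton(toy_distill, K=4, theta0=0.0)
assert subsets == [5.0, 2.5, 1.25, 0.625]  # decreasing residual knowledge
assert theta == 9.375                      # model approaches the target
```

Note how each subset in the toy run is strictly smaller than the previous one: this mirrors the paper's claim that a jointly optimized, monolithic synthetic set would instead spend all subsets on the same low-level (large-residual) features.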
Rebuttal 1: Rebuttal: **Q1:** It's not clear how generalizable these results are to other tasks and datasets. **A1:** Our experimental framework adheres rigorously to the configurations of a suite of established dataset distillation benchmarks, specifically MTT[5], DM[46], CAFE[42], KIP[34,45], IDC[21], FTD[11], and Haba[31]. This standardized experimental setup encompasses datasets, network architectures, and evaluation metrics, thereby ensuring an equitable and unbiased comparison among dataset distillation methodologies. Furthermore, we extended our experiments with additional cross-architecture generalization assessments, as presented in Table 2. These results verify the **superior cross-architecture generalization** of our proposed SeqMatch. It is important to highlight that conducting experiments under outlier dataset or architecture settings would yield outcomes devoid of substantive relevance or meaningful insight. **Q2:** There might be a risk of overfitting. **A2:** The size of each subset bears an impact on the efficacy of SeqMatch. Our parameter study on the number of subsets ($K$) reveals a significant correlation: subsets with insufficient instances inevitably falter in their ability to encapsulate essential features inherent to the original dataset. Our parameter study of SeqMatch-MTT on the CIFAR-10 dataset is reported below:

| K | 1 | 2 | 3 | 4 | 5 |
|:---|:---:|:---:|:---:|:---:|:---:|
| ipc=10 | 65.3 | 66.2 | 65.6 | 65.0 | 63.5 |
| ipc=50 | 71.6 | 73.2 | 74.4 | 74.1 | 74.3 |

The outcomes distinctly indicate a decline in performance as $K$ escalates from 2 to 5 when ipc=10. A plausible explanation for this trend is the inherent difficulty faced by subsets with an insufficient number of images (ipc < 5) in effectively encapsulating comprehensive knowledge from the target dataset.
Evidence supporting this explanation is that subsets with ipc >= 10 no longer exhibit a decline in performance (see the results for ipc=50). **Q3:** which could potentially increase the computational cost and time **A3:** We acknowledge the escalation in computational demands and have addressed this limitation in lines 360-367 of our paper. However, it is important to note that the increase in computation is not directly proportional to the number of subsets. This is attributed to the reduction in training iterations for each synthetic subset, wherein only a segment of the teacher trajectories is condensed. Of greater significance, computational expenditure is not the principal concern in the dataset distillation endeavor. Highlighting this, the most recent survey on dataset distillation [49] underscores the discernible gap in performance between synthetic and original datasets, alongside the substantial memory overhead inherent to dataset distillation, thereby impeding its broader adoption in real-world scenarios. In addition to augmenting performance, our proposed SeqMatch offers an ancillary advantage: a reduction in the memory expenses associated with dataset distillation. We report the memory usage of MTT and SeqMatch-MTT on CIFAR-10 below:

| IPC | 10 | 50 |
|:---|:---:|:---:|
| MTT | 10,108 MiB (100%) | 34,540 MiB (100%) |
| SeqMatch-MTT | 7,164 MiB (70.8%) | 15,252 MiB (44.2%) |

The memory used by SeqMatch is **reduced** significantly, in particular in the settings with large memory usage (ipc=50). **Q4:** How do you account for the randomness in the initialization of the synthetic dataset, and how does it affect the rate of convergence of each synthetic instance? **A4:** As **emphasized** in lines 196-202, discrepancies in initialization and pre-assigned labels engender divergence in the convergence patterns of synthetic data.
To address this, we introduce a novel metric termed the **"amplification function,"** as presented in line 212, to quantitatively measure these disparities. Our empirical investigations, demonstrated in Figure 2, illustrate that such discrepancies give rise to a coupling issue that obstructs the effective condensation of features from instances within the $\mathcal{S}^-$ subset (comprising instances with small amplification values). This investigation and the associated formulation are among the **major contributions** of our work. Existing dataset distillation methods such as MTT[5], IDC[21], and CAFE[42] tend to clone a random instance from the original dataset as the initialization of the synthetic dataset, which aggravates the divergence in the convergence patterns of synthetic data. **Q5:** The authors mention that instances sharing a similar initialization within the same class will converge similarly. **A5:** We find the logic behind this inquiry somewhat perplexing. Is the reviewer implying that our claim might not be entirely accurate and could introduce a bias into our findings? In response, we assert that this is not the case. Our claim serves as neither an assumption nor a theoretical foundation for our findings. Rather, it represents a reasoned explanation, inspired by references [1], [14], and [44], to interpret the experimental outcomes showcased in Figures 1 and 2 of our paper. **Q6:** What is an effective mechanism for identifying or quantifying the 'difficult' instances? **A6:** The establishment of an effective approach for identifying or quantifying instances deemed 'difficult' presents a prospective avenue for future exploration. Currently, our approach involves employing the average loss reduction observed during standard training as a metric to discern these 'difficult' instances, in alignment with the methodology expounded in [14].
An adapted mechanism within the context of dataset distillation, capable of quantifying and recalibrating the intrinsic "hardness" of instances, offers a promising direction for intensified and enhanced dataset distillation strategies. --- Rebuttal Comment 1.1: Title: Thanks for the comments and raising the score Comment: Dear reviewer u5kd, We would like to thank you for the valuable advice and for raising the score of our paper. We will incorporate your suggestions, especially regarding the effective mechanism that **quantifies and reinforces** the 'difficult' data in the dataset distillation process. Best The authors --- Rebuttal 2: Title: A request for reviewing the rebuttal Comment: Dear reviewer u5kd, Thank you for your advice and questions regarding the review of our paper. Serving as a reviewer greatly contributes to the advancement of deep learning research, and it is a crucial duty for researchers. We understand that reviewers have busy schedules, but we sincerely hope that you can spare a moment to review the rebuttal we have submitted to address your concerns. We kindly request you to thoroughly review our rebuttal along with the comments from reviewer QS3g. Once again, we extend our gratitude for your assistance and hope that we can address all of your questions before the end of the discussion period. Best The authors --- Rebuttal Comment 2.1: Comment: Thanks for addressing my concerns and I appreciate the authors' thorough rebuttal. I would like to raise the score to 5: Borderline accept. --- Rebuttal 3: Comment: Dear reviewer u5kd, We express our gratitude for your valuable feedback and your dedicated service as a reviewer within the research community. Best The authors
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers and the Area Chair for dedicating their time and effort to the review of our work. We appreciate the constructive questions and suggestions that have contributed to the enhancement of SeqMatch. Based on the feedback received, we have made revisions to our work as outlined below: 1. We have revised Algorithm 1 as per the suggestion made by reviewer QS3g in order to enhance its clarity. We introduced an inner loop to provide an explicit illustration of the optimization process for each subset within SeqMatch. Additionally, we incorporated an input parameter labeled "Base Distillation Method" to signify the potential integration of other distillation methods with SeqMatch. The updated version of Algorithm 1 has been appended to the response PDF for reference. 2. We have appended the visualizations of SeqMatch (**Figure 4** in the response PDF) under the settings ipc=$\{2,3\}$, alongside SeqMatch-MTT, for the CIFAR-10 dataset. The corresponding evaluation accuracies are presented below:

| IPC | 2 | 3 |
|:---|:---:|:---:|
| MTT | 51.6 | 54.5 |
| SeqMatch | 52.9 | 57.0 |

SeqMatch outperforms MTT with performance enhancements of $\{1.3\%, 2.5\%\}$ under the settings ipc=$\{2,3\}$. 3. We have presented the ablation study over the number of subsets $K$ as suggested by reviewer xCSw. The results are visualized as **Figure 5** in the response PDF. The outcomes distinctly indicate a decline in performance as $K$ escalates from 2 to 5 when ipc=10. A plausible rationale for this phenomenon lies in the fact that subsets with insufficient images (ipc < 5) struggle to effectively distill comprehensive knowledge from the target dataset. Conversely, the degradation in performance as $K$ increases remains marginal when ipc=50.
This is potentially attributed to the subset size surpassing the threshold required to adequately capture essential features (ipc > 10), thereby substantiating the observed trend. 4. We have studied the memory usage of SeqMatch. High memory overhead is one of the critical bottlenecks of dataset distillation [8][49]. In addition to augmenting performance, our proposed SeqMatch offers an ancillary advantage: a reduction (up to **56%**) in the memory expenses associated with dataset distillation. We report the memory usage of MTT and SeqMatch-MTT as **Table 3** in the response PDF. 5. We have corrected the improper citation and the consistency issues in the references. We also append further references related to our work below. [49] Yu, Ruonan, Songhua Liu, and Xinchao Wang. "Dataset distillation: A comprehensive review." arXiv preprint arXiv:2301.07014 (2023). [50] Cazenavette, George, et al. "Generalizing Dataset Distillation via Deep Generative Prior." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [51] Liu, Yanqing, et al. "DREAM: Efficient Dataset Distillation by Representative Matching." arXiv preprint arXiv:2302.14416 (2023). [52] Sachdeva, Noveen, and Julian McAuley. "Data distillation: A survey." arXiv preprint arXiv:2301.04272 (2023). [53] Zhang, Lei, et al. "Accelerating dataset distillation via model augmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [54] Loo, Noel, et al. "Dataset Distillation with Convexified Implicit Gradients." (2023). Pdf: /pdf/61894a40c84b68a7214f7ba916c3c607081c8c40.pdf
NeurIPS_2023_submissions_huggingface
2023
Maximum Average Randomly Sampled: A Scale Free and Non-parametric Algorithm for Stochastic Bandits
Accept (poster)
Summary: The authors propose a novel non-parametric algorithm for stochastic bandits, based on a sub-sampling scheme. While other algorithms using sub-sampling have been recently proposed, their approach significantly differs from these works. Indeed, instead of using sub-sampling to perform pairwise comparisons, they use it to build exact confidence intervals used in a UCB algorithm called MARS. To do that, they build on results provided by statisticians in the 60s and 70s, showing that sub-sampling can be used to build a set of *typical values* for the true mean of each distribution. After describing their method, the authors provide the theoretical guarantees of MARS: logarithmic regret under the assumption that the distributions are continuous and symmetric, and some exponential moment condition similar to the one used for $\psi$-UCB. Interestingly, these guarantees are achieved while the algorithm does not use the function $\psi$ directly, hence avoiding sub-optimal choices of $\psi$, which is the advantage of non-parametric algorithms. Then, a set of experiments aims at validating the method. Strengths: I think that the paper is overall well-written and clear, and that the algorithm and theoretical results are well presented and easy to understand. While the literature on bandits is now quite vast, it is difficult to come up with new principles, especially in the standard stochastic case. Recently, the literature on non-parametric algorithms based on bootstrap and sub-sampling has proposed interesting new approaches, and this paper is in my opinion a nice addition to this literature, showing new potential. In particular, the theoretical guarantees hold under original assumptions, for which no other algorithm is proved to work (additional assumptions would be needed). Furthermore, the empirical results validate the approach. Overall, I think that the paper brings enough interesting insights for publication at NeurIPS, up to some changes listed below.
The derivation of the technical results seems correct to me. Weaknesses: In my opinion the paper has two major weaknesses, which are related: * I believe that in the context of the paper the literature review on non-parametric algorithms should be more precise, and the contribution of MARS compared to these approaches should be better explained. First, the presented algorithms are not all based on "subsampling": GIRO, PHE, Bootstrap-UCB and Reboot are closer to bootstrap, while only BESA and SDA are technically speaking based on sub-sampling. Those approaches are rather different, work under different assumptions, and do not share the same pros and cons. Furthermore, the last line "One drawback of..." is misleading, and makes the reader believe that the said drawbacks are shared by all these algorithms, would be solved by MARS, and are in fact the motivation for MARS; it seems to me that all of these implied messages are wrong. In my opinion, there are several axes that should be detailed when talking about these algorithms: the family of distributions for which they work (and maybe how tight their guarantees are for these families), what they need to know about the distributions, and their cost (memory and time). For instance, PHE is a faster and memory-less alternative to GIRO, but its guarantees are sub-optimal and restricted to bounded distributions, and there is a tunable parameter (but this method easily generalizes to structured settings, which is its main strength). On the other hand, SDA works in a much broader setting (families of distributions for which a "balance" condition is satisfied), but this generality comes at the cost of storing all observations in memory: this is in my opinion similar to what is obtained with MARS. Furthermore, the question of improving the computation time and memory for SDA has been studied for the LB-SDA algorithm (http://proceedings.mlr.press/v139/baudry21b/baudry21b.pdf).
* (Related) the memory and computation time of MARS are not well discussed. In particular, they seem worse than for most existing approaches: maintaining the sub-sample means for each arm costs $O(Kn^2)$ (if $\delta=1/n^2$ is set), while at most $O(n)$ is needed for the benchmarks. Furthermore, the update of the UCB also requires sampling $n^2$ Bernoulli variables at each time step, which is also much larger than the cost of even the less "optimized" existing non-parametric approaches. Those are major drawbacks in my opinion that should be explicitly discussed, with comparisons against the benchmarks. However, this does not mean that the approach is not interesting: I believe that the authors should really focus on the fact that their algorithm has guarantees under original assumptions for which there are no competitors (or no tight competitors). * The contribution of the paper is not on the technical side, as most results and techniques used by the authors are known, but I think that it is interesting enough that this is not an issue. * I find the set of experiments not very convincing: only a few curves/benchmark algorithms are used, and I don't understand why $\delta$ is not set to the theoretically valid threshold ($4\times 10^{-6}$). In my opinion SDA (for instance LB-SDA, which is the fastest SDA) should be present in the benchmarks since it is theoretically valid for all the settings considered. I would like to see a comparison of computation times too. I believe the authors left enough space so that all these discussions could be added. In my opinion they should be carefully addressed in the revision, and I would be happy to further raise my score in that case. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * The statistical validity of the estimator holds for large enough sample sizes.
For smaller sample sizes, the authors build a UCB using the *maximum of the observed data* and a fixed probability of returning $\infty$ to keep the $\delta$ confidence. At first read, this step is a bit confusing, and it may have been easier to simply provide the required $\log(n)/\log(2)$ samples per arm at the initialization of the algorithm. Is there an empirical motivation for not doing that (because from the theoretical point of view this does not seem to change much)? * Related question: is it clear that the $2^{-T_i(n)}$ term is tight? Since it is critical in terms of computation/memory cost it may be interesting to optimize it. * It would be interesting to understand how good the theoretical results are. You started to do that by comparing them with $\psi$-UCB and the usual bound with $\Delta^2$. Did you try to look at the Burnetas & Katehakis lower bound for this specific family of distributions? * Did you investigate an anytime version of MARS? It seems to me that the proof may easily adapt, making the algorithm more flexible. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No potential societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
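To make the subsample-based confidence bound the review discusses concrete, here is a minimal sketch of a subsample-mean UCB in that spirit. This is an illustrative reading only, not the paper's exact MARS construction: the Bernoulli-inclusion subset drawing, the choice of the number of subsamples, and the small-sample fallback to $\infty$ are simplified assumptions of ours.

```python
import math
import random

def subsample_ucb(rewards, delta, rng):
    """Sketch of a subsample-based UCB: draw random non-empty subsamples
    of the arm's reward history (fair Bernoulli inclusion) and return the
    maximum subsample mean. When too few pulls are available to support
    confidence level delta, return +inf, mirroring the forced-exploration
    fallback discussed in the review."""
    n = len(rewards)
    m = math.ceil(math.log(1.0 / delta) / math.log(2.0))  # subsamples drawn
    if n < m:
        return math.inf              # not enough data: arm must be explored
    best = -math.inf
    for _ in range(m):
        sub = [x for x in rewards if rng.random() < 0.5]
        if not sub:                  # patch degenerate empty subsets
            sub = [rng.choice(rewards)]
        best = max(best, sum(sub) / len(sub))
    return best

rng = random.Random(0)
pulls = [rng.gauss(0.0, 1.0) for _ in range(50)]
ucb = subsample_ucb(pulls, delta=1e-3, rng=rng)
# The bound is a maximum of subsample means, so it always lies within the
# observed reward range; no variance proxy or tail function psi is used.
assert min(pulls) <= ucb <= max(pulls)
```

Note that, unlike a concentration-inequality UCB, this bound never exceeds the largest observed reward, which is exactly why a symmetry assumption on the reward distribution is needed for its validity.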
Rebuttal 1: Rebuttal: We thank you for your detailed and insightful review. **@R4-A1) More precise literature review and clarification of MARS's contribution** To enhance the precision and accuracy of the literature review, we revised the "Related Work" section of the paper. The text previously present in lines 56-69 was replaced with the following content: *"Recently, several works have focused on non-parametric bandit algorithms based on subsampling and bootstrapping [5,6,16,15,20]. These works use the empirical distribution of the data instead of fitting a given model to the data. GIRO relies on the history of past observed rewards and enhances its regret bound by augmenting the history with fake samples [16]. PHE serves as a faster and more memory-efficient alternative to GIRO, demonstrating adaptability to structured settings. However, PHE has the limitation of being restricted to bounded distributions and involves a tunable parameter [15]. Reboot [20] perturbs the history in order to improve the regret bound.* *Bootstrapping Upper Confidence Bound uses the bootstrap to construct sharper confidence intervals in a UCB-type algorithm [11]. However, it used a second-order correction to guarantee the non-asymptotic validity of the bootstrap threshold. The second-order correction is not sharp, and it includes some scaling factors.* *Another line of work, including BESA [5] and SDA [6], uses subsampling to conduct pairwise comparisons (duels) between arms. BESA organizes the duels between arms and finds the winner by comparing the empirical averages of sub-sampled rewards.
SDA extends the concept of BESA duels to a round-based structure by incorporating a sub-sampling scheme, and it eliminates the need for forced exploration.* *Apart from Reboot, which was analysed for Gaussian distributions, and SDA, which was analysed for a family of distributions satisfying a balance condition (namely Gaussian and Poisson), the other algorithms were analysed for distributions with known bounded support."* The content in line 83 of the contribution section was substituted with the following information to provide a clearer explanation of MARS's contribution. *"MARS achieves logarithmic regret without using the function $\psi(\cdot)$. Hence it avoids sub-optimal choices of $\psi(\cdot)$"* **@R4-A2) Memory and Computational Complexity of MARS** We agree with the reviewer's observations concerning the computational complexity and memory usage of MARS. In order to address this important aspect, we added a paragraph after line 162. Please see Review 1's Rebuttal section labelled **@R1-A2** for the paragraph and further details. We conducted an analysis of its runtime against alternative approaches. The relevant table can be found in the attached PDF and has been added to Section 5 of the supplementary material. Please see **@R1-A2** for the added text explaining the results of the table. **@R4-A3) Assessing the Quality of Theoretical Results and Examination of the Burnetas & Katehakis Lower Bound** The current version presents the logarithmic upper bound for regret without directly utilizing $\psi(\cdot)$. We find the idea of exploring the possibility of establishing a lower bound for regret to be another interesting path for further theoretical investigation of MARS. We sincerely appreciate your valuable suggestion.
**@R4-A4) Necessity of constructing a UCB using either the maximum of observed rewards or infinity, and the possibility of the alternative approach of providing the necessary $\log(n)/\log(2)$ samples per arm during the initialization phase** The mentioned modification enables obtaining a guaranteed confidence bound for any number of observations without directly using the $\psi(\cdot)$ function or making any distributional assumptions on the arms. The confidence bound in Table 1 was proven to be guaranteed in Theorem 2. Then, the guaranteed bound was used in proving Theorem 3. In equation (4) in the supplementary material we have $$\mathbb{P}\left(\mu_1\geq{\text{UCB}}_1(s,\delta)\right)= \delta,$$ which is implied by Theorem 2. Providing $\log(n)/\log(2)$ samples per arm does not guarantee a confidence region for all values of $\delta$ within the range $(0,1)$. It is important to note that Table 1 contains a typo regarding the placement of the probabilities in front of "w.p.": the probabilities were swapped. The probability of assigning infinity to the upper confidence bound is not constant; rather, it diminishes as the number of times arm $i$ is pulled ($T_i$) increases. **@R4-A5) Is $2^{-T_i(n)}$ tight?** We think the initial stage of the algorithm is inevitable since we aim to avoid using $\psi(\cdot)$ or any tail information. MARS is in this phase for each arm while $T_i$ is less than $\log(\delta^{-1})/\log(2)$. For example, when $\delta=10^{-6}$, the initial phase occurs for $T_i<2\log(10^6)/\log(2)=39.87$. It is important to note that we demonstrated that this approach yields logarithmic regret with the proposed initial phase. **@R4-A6) Empirical evaluation and numerical experiments** Please see Review 3's Rebuttal, sections labelled **@R3-A1, @R3-A2, @R3-A4**.
**@R4-A7) Why $\delta$ is not set to the theoretically valid threshold $4\times 10^{-6}$** The guidance provided in the book [17] on page 104 proposes the following: _"This suggests we might choose $\delta \approx 1/n$ …"_ As a result, we chose $\delta=1/1000$. To be fair, the selection of $\delta$ for both Vanilla UCB and Bootstrapped UCB is the same. **@R4-A8) Anytime version of MARS** Along the same lines as the proof of Theorem 2.1 on page 11 of reference [7], we can derive a logarithmic upper bound for regret by selecting $\delta_t = 1/t^2$. We would like to thank you once again for your time, effort, and insightful feedback. --- Rebuttal Comment 1.1: Title: post-rebuttal comment Comment: I read the other reviews and the authors' rebuttal, which confirm my evaluation of the paper. I want to thank the authors for their careful clarification, and I believe that if the authors take into account the comments made in all the reviews for their revision (such as further discussing the practical aspects of the algorithm: memory and empirical performance) the paper will be ready for publication.
Summary: The paper presents a new approach to developing a data-dependent upper confidence bound to replace the classical UCB based on concentration inequalities. The data-dependent bound is constructed using sub-sampling of rewards and offers a tighter estimate of the error than the classical UCB, resulting in improved performance. Strengths: The approach adopted in the paper is quite interesting and seems to be significantly simpler than existing studies on data-dependent bounds. Although Theorem 1 is based on an existing result, its application to bandits is interesting and I haven't seen that before. Weaknesses: I am convinced that there is some novelty in the paper, especially on the theoretical side. However, I feel that it may not be sufficient for a paper like this to warrant publication. While data-dependent bounds are interesting to study and analyze, they themselves have little to offer in terms of advancing the theoretical understanding of multi-armed bandits. This collection of studies focusing on data-dependent bounds is, at a fundamental level, about improving the practical implementation of MAB. Specifically, as pointed out even by the authors, the classical UCB requires some additional information that is often difficult to obtain in practice, thereby making such data-dependent approaches a practical alternative. For a result that is fundamentally based on improving practical implementation, the paper has very limited empirical evaluation. I strongly suggest that the authors add a more detailed empirical evaluation, both in terms of comparison with more algorithms like GIRO and PHE and in terms of more complicated examples, preferably a real-world example. A more extensive empirical evaluation will demonstrate the actual improvement offered in practice by this new data-dependent estimator, and the existing theoretical results will _support_ and _explain_ those results, making a strong overall paper.
I am willing to increase my score if the authors can add more detailed experiments with proper analysis. I do understand that the rebuttal period might not be sufficient for that, in which case I suggest the authors resubmit with more experiments. The results definitely seem promising. EDIT: Score updated based on rebuttal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your recognition of the theoretical novelty presented in our work. To address your concerns regarding the empirical evaluation of MARS, several experiments were added to the paper, as described below: __@R3-A1) Enhancing Empirical Evaluation of MARS in the Paper__ As proposed, GIRO and PHE were implemented and compared with MARS in all setups, i.e., Gaussian, Truncated Gaussian, Uniform, Exponential, and Bernoulli. The specifics of these changes are outlined below. The attached PDF file contains updated versions of Figures 2(a), 2(b), and 3. The original Figures 2(a), 2(b), and 3 were replaced with these new versions. After adding the new simulations and experiments, we revised Section 3, and lines 194 to 220 were replaced with the following text. *"First, consider the case where the rewards corresponding to all arms are Gaussian with variance 1. The cumulative regrets are shown in Figure 2(a). Since Normal-TS uses knowledge of the distribution and the variances are correct in this case, it is the best as expected. The Vanilla UCB algorithm demonstrates comparable or superior performance compared to Bootstrapped UCB, MARS, and GIRO. The performance of the PHE approach is heavily dependent on the parameter $a$. When $a=2.1$, it shows linear regret. However, for $a=5.1$, it outperforms most other approaches, except for Thompson sampling.* *Both Vanilla UCB and Normal-TS depend on the variances, which were assumed known in the previous simulation. We repeat the former simulation with the variances incorrectly set to 2. The result is shown in Figure 2(b). Evidently, MARS, GIRO, and Bootstrapped UCB outperform both Vanilla UCB and Normal-TS when incorrect values of the variances are used. MARS demonstrates superior performance over Bootstrapped UCB and GIRO after an initial set of rounds. Moreover, unlike GIRO and Bootstrapped UCB, MARS does not require the full storage of the reward history, resulting in lower computational complexity.
As previously observed, PHE with a value of 2.1 demonstrated the poorest performance, whereas PHE with a value of 5.1 achieves the best performance.* *MARS and GIRO do not use the tail information of the rewards. By contrast, Vanilla UCB, Normal-TS, and Bootstrapped UCB use distributional and tail information about the rewards, and their performance can deteriorate when this prior knowledge is wrong or conservative.* *To illustrate this, we repeated the simulation for the case where the rewards admit a uniform distribution over $[-1, 1]$. The results are shown in Figure 3. They show that MARS, which uses neither the distribution of the rewards nor tail information, outperforms the other methods except PHE ($a=2.1$) in this case. An intriguing observation is that PHE ($a=5.1$) exhibits outstanding performance in the Gaussian setup, yet performs poorly in the Uniform setup, indicating a strong reliance on the tunable parameter. This dependence on the parameter could pose challenges in real-world applications where the environment is unknown, making methods like MARS and GIRO more practical alternatives.* *For additional simulations in the exponential and Gaussian setups, refer to Section 5 in the supplement.”* __@R3-A2) Enhancing Empirical Evaluation of MARS in Supplementary Material__ Three simulations were included in Section 5 of the supplementary material. The attached PDF file contains revised versions of Figures 1 and 2 of the supplementary. A new simulation and a table were added to the supplementary material to assess MARS for non-symmetric distributions and to compare its runtime with other approaches. Detailed explanations of these additional simulations are provided below. 
__(Truncated Gaussian Setup):__ GIRO and PHE were added to the simulation, and the explanation was changed as below: *“The numerical experiment conducted in Section 3 of the main paper is replicated in Figure 1, utilizing Gaussian rewards truncated to the range $[-1, 1]$. In line with the uniformly distributed rewards presented in the main paper, the results show that MARS outperforms the other methods except PHE (a=2.1), since it utilizes neither the reward distribution nor tail information.”* **(Exponential Setup)** GIRO and PHE were added to the simulation, leading to modifications in the explanation: *“MARS is a well-suited method for handling a wide range of symmetrically distributed rewards, as it does not rely on tail information. In this context, we replicate the numerical experiment from Section 3 of the main paper, using exponentially distributed rewards. In this experiment, we also include a comparison of MARS with the sub-sampling-based BESA approach [5], to provide a comprehensive evaluation. Figure 2 clearly demonstrates that MARS outperforms the other methods, including BESA, PHE (a=2.1), and PHE (a=5.1).”* __@R3-A3) Runtime Analysis of MARS__ To further explore the efficiency of MARS and its applicability in real-world scenarios, we compared its runtime with that of alternative approaches. Please see **@R1-A2** for further details. __@R3-A4) MARS performance for non-symmetric reward distributions (Bernoulli)__ The Bernoulli bandit is a framework for tackling various real-world challenges, such as online advertising, where advertisers must make prompt decisions on which ad to present to a user to maximize the likelihood of a click. In this scenario, the assumption of MARS, i.e., that rewards are distributed symmetrically around the mean, is not satisfied. To evaluate the robustness of MARS and its effectiveness in real-world applications when not all assumptions hold, we implemented MARS in this setup. 
Through this evaluation, we can gain insights into MARS's suitability and performance in practical applications despite deviations from ideal assumptions. Please see **@R1-A1** for further details. We would like to thank you once again for your time, effort, and insightful feedback. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank you for your detailed response. I feel the experimental results are promising and support the paper well. The authors have addressed my main concern, and I am raising my score to 6.
Summary: The paper proposes a new method to compute the upper confidence bound (UCB). The new method does not require knowing or estimating the scale parameters. The property shown by Theorem 2 addresses the conservativeness issue of the existing methods. The practical importance of being free from modeling the scale parameters is verified in the experiments. Strengths: - (a) The paper contributes a new way to compute UCBs of practical importance; it does not require scale information. - (b) Rigorous theoretical analysis. - (c) Experiments demonstrate the advantage of the proposed UCBs. Weaknesses: - (d) What are typical values? It is defined in Definition 1, but its intuition and usage are unclear. Also, it would be great to explain the typical value's role in proving Theorem 2 (the only mention I saw of it is in the proof of Proposition 2). - (e) Maybe adding a discussion (after Line 192) about recovering the worst-case bound would make the paper more complete? - (f) Is it better to start the paper with an Introduction? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - (g) The symmetric assumption restricts the applicability of this submission. Can we remove it? What would the challenge(s) be? - (h) Visualizing the differences between the choice of scale parameters (Figure 1) and the choice of prior (Figure 2) is nice. Is it possible to demonstrate the difference between the conservative approaches (Lines 51--54) and the non-conservative (Lines 141--143) UCBs? - (i) Between the scale-free (this submission) and the scale-aware (baselines compared in this submission) approaches, another approach would be estimating the unknown scale parameters. If some scale-estimation approach does not require the symmetric assumption, it might be a strong baseline to be compared. A related paper would be (http://proceedings.mlr.press/v119/zhu20d/zhu20d.pdf). Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and positive feedback on the theoretical analysis. **@R2-A1) What are typical values? It is defined in Definition 1, but its intuition and usage are unclear. Also, it would be great to explain the typical value's role in proving Theorem 2 (the only mention I saw of it is in the proof of Proposition 2).** The typical values in Definition 1 are random variables that partition the real line into equally probable segments, such that the true mean lies within each of these segments with identical probability. Theorem 1 proves that the means calculated from sub-samples (drawn with probability $1/2$) form a set of typical values. This concept helps to establish a guaranteed upper confidence bound, as shown by Theorem 2, without using any concentration inequalities, which are often conservative and involve scaling parameters. The notion of typical values is crucial in the proof of Theorem 2, which is later used in the algorithm and regret analysis. To clarify this point, the following sentence was included after Theorem 1. *“Theorem 1 shows that the $M-1$ estimates of the mean computed by random sub-sampling are a set of typical values for $\mu_i$ and partition the real line into equiprobable segments, where $\mu_i$ belongs to each of those segments with equal probability.”* **@R2-A2) Maybe adding a discussion (after Line 192) about recovering the worst-case bound would make the paper more complete?** To clarify the regret bound, lines 187–192 were revised as below: *“As shown in equation (6), the regret bound for MARS is always $O(\log(n))$ without relying on the use of $\psi(\cdot)$. When $\psi_i^\*(\Delta_i)<1.59$, the task becomes more challenging, as identifying the optimal arm becomes harder. In such scenarios, the regret bounds for both the proposed MARS and the $\psi$-UCB, which employs $\psi(\cdot)$, become dependent on the function ${\log(n)}/{\psi_i^\*(\cdot)}$. 
This demonstrates the effectiveness of the introduced non-parametric UCB method. Corollary 1 also explores the effectiveness of MARS when dealing with subgaussian rewards. It demonstrates that even without prior knowledge of the $\sigma_i$ values, MARS successfully addresses bandit problems, achieving a regret bound of $O(\sum_{i:\Delta_i>0}\log(n)/\Delta_i)$ for challenging scenarios where $(\Delta_i^2)/(2\sigma_i^2)<1.59$.”* **@R2-A3) Is it better to start the paper with an Introduction?** Lines 17 to 96 of the paper were placed in a section labelled Introduction, as proposed. **@R2-A4) The symmetric assumption restricts the applicability of this submission. Can we remove it? What would the challenge(s) be?** Please see the first response in Reviewer 1’s rebuttal, labelled **@R1-A1**. **@R2-A5) Visualizing the differences between the choice of scale parameters (Figure 1) and the choice of prior (Figure 2) is nice. Is it possible to demonstrate the difference between the conservative approaches (Lines 51--54) and the non-conservative (Lines 141--143) UCBs?** In Section 3 (Experiments), MARS, which uses a non-conservative UCB, was compared with other methods. Two more methods, GIRO and PHE, were also added to the simulation; the results are available in the figures in the attached PDF. The Vanilla UCB in those simulations uses a concentration inequality that includes a scaling parameter, and the Thompson sampling implemented in that section uses a Gaussian prior. The performance of these methods was assessed and compared with that of MARS, along with several others. **@R2-A6) Between the scale-free (this submission) and the scale-aware (baselines compared in this submission) approaches, another approach would be estimating the unknown scale parameters. If some scale-estimation approach does not require the symmetric assumption, it might be a strong baseline to be compared. 
A related paper would be (mlr.press/v119/zhu20d/zhu20d.pdf).** We appreciate your suggestion regarding the comparison of MARS with methods that estimate unknown scale parameters. In line with your concerns about enhancing the empirical evaluation and conducting comparative analyses with alternative approaches, we added simulations to the paper. As an example, we compared MARS with six alternative approaches in a multi-armed bandit setup with exponentially distributed rewards. The outcomes of this comparison are presented in “Figure 3 in Supplement” of the supplementary section, available in the attached PDF. Among these six approaches, BESA (referred to as BESAMULTI) is a scale-free algorithm that selects arms through duels. As illustrated in the figure, MARS demonstrates superior performance over BESA in this exponential configuration. For additional empirical assessments and comparisons, please see the response labelled **@R3-A1** in the rebuttal for Reviewer 3. We would like to thank you once again for your insightful feedback. --- Rebuttal Comment 1.1: Comment: The feedback clarifies all my questions and provides insights from the other reviewers' comments. I want to thank the authors for their feedback on all reviews and all reviewers' comments. As a result, I would like to keep my original decision.
Summary: In multi-armed bandits (MAB), when the noise distribution is known, the UCB algorithm with a carefully constructed confidence bound achieves a gap-dependent regret depending on the noise distribution. This manuscript studies the following question: when the noise distribution is unknown, is there an algorithm that adaptively constructs the confidence bound and achieves a near-optimal regret? To answer this question, the authors used a cute observation in [Campi and Weyer, 2010]: if the distribution of a random variable is symmetric around its mean, then the sample means after subsampling (with probability 1/2) split the real line into several equiprobable pieces for the true mean. This provides a fully data-driven approach to construct the upper confidence bound, and this manuscript shows that the resulting algorithm nearly achieves the optimal regret attainable when the noise distribution is known. Experiments are also provided. Strengths: Overall I like this paper. This paper asks a clean question and provides a satisfactory answer to it. Although the main observation comes from [Campi and Weyer, 2010], and the analysis of the algorithm essentially parallels the traditional UCB analysis, in my opinion the application to bandit problems is still nice and makes this paper interesting enough for a NeurIPS publication. Weaknesses: I don't see a huge weakness of this work, but I can list some minor ones: 1. The main novelty is the application of the observation in [Campi and Weyer, 2010] to a data-driven confidence bound in bandits, and everything else in the paper is pretty standard. 2. The requirement that the distribution is symmetric around its mean is pretty restrictive. 3. The computational complexity of the resulting algorithm could be high and may not lead to a practically useful algorithm. Despite the above concerns, I'd like to reiterate that the merits weigh more than the weaknesses in my opinion, and I still like this paper. 
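The equiprobable-partition observation described in the summary above can be checked empirically. The following is an illustrative sketch (not code from the paper; the Gaussian reward model, sample size, and number of sub-samples are arbitrary choices) that counts which of the $M$ segments formed by $M-1$ sub-sampled means the true mean falls into:

```python
import numpy as np

rng = np.random.default_rng(0)
T, M, trials = 50, 5, 20000      # T samples per trial, M-1 sub-sampled means
mu = 1.0                         # true mean; the scale below is unknown to the method
counts = np.zeros(M, dtype=int)  # which of the M segments mu lands in

for _ in range(trials):
    x = rng.normal(mu, 2.0, size=T)             # symmetric around mu
    means = []
    for _ in range(M - 1):
        mask = rng.integers(0, 2, size=T).astype(bool)  # include each sample w.p. 1/2
        while not mask.any():                   # simplification: skip empty sub-samples
            mask = rng.integers(0, 2, size=T).astype(bool)
        means.append(x[mask].mean())
    # number of sub-sampled means below mu = index of mu's segment
    counts[np.searchsorted(np.sort(means), mu)] += 1

print(counts / trials)  # per the claimed property, each entry should be near 1/M = 0.2
```

If the observation holds, the empirical frequencies come out roughly uniform even though the method never sees the scale of the noise, which is exactly the scale-free property the review highlights.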
Detailed comments: Table I: I think the probabilities on the first two rows should be swapped. Section 1: In my opinion, Section 1 should be a standalone section without resorting to bandits. In particular, the subscripts i should be removed. Theorem 2, or the case 2^T < M: is the modification in Table I really necessary for the application to MAB? After inspecting the proof, I feel that simply choosing UCB(T, delta) = \infty whenever T < log_2(1/delta) also gives the same regret upper bound; is that true? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors say anything about adaptivity (possibility or impossibility) when the reward distribution is not symmetric around the mean? 2. Theorem 3 still exhibits a gap compared with UCB with known noise distribution when Delta is large. Do you think this gap is essential (unavoidable for any algorithms), or can be closed using some other approaches? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and for recognizing the utilization of [Campi and Weyer, 2010] in the context of the multi-armed bandit problem. **@R1-A1) MARS performance for non-symmetric reward distributions** A new simulation was added to Section 5 of the supplementary material to demonstrate MARS's performance when the reward distribution is non-symmetric. MARS was implemented with Bernoulli-distributed rewards, and the result is shown in Figure 3 of the supplementary material (in the attached PDF file). The following text explaining the new simulation was added to Section 5 of the supplementary material. *“To evaluate the robustness of MARS and its effectiveness where the assumption of symmetric reward distributions is not met, MARS is implemented in a Bernoulli setup where the number of arms is $K = 2$ and the means are $\mu_1=0.5$ and $\mu_2=0.01$. The findings are depicted in Figure 3 (please refer to Figure 3 of the supplementary material in the attached PDF file). The results indicate that although the setup does not meet the symmetric assumption, MARS still outperforms Vanilla UCB and Thompson sampling. Its performance is also comparable to that of BESA.”* Additionally, the paper [\*] assesses the impact of asymmetric distributions on confidence regions for model parameters using a similar approach. It shows that the confidence regions remain robust to small asymmetries, showcasing the method's reliability in such scenarios. [\*] Care, Algo, Balázs Csanád Csáji, and Marco C. Campi. "Sign-Perturbed Sums (SPS) with asymmetric noise: Robustness analysis and robustification techniques." 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. The following sentence was added to the future work below line 233 to mention the above reference. 
*“Another interesting future direction is the evaluation of the method on asymmetric rewards and the robustification of the approach following the same principles as proposed in the paper [\*].”* **@R1-A2) Computational Complexity of MARS** The computational complexity of the approach depends on the choice of $\delta$, as MARS updates $\lceil 1/\delta \rceil$ sub-sampled means for the chosen arm in each round. To address this important aspect of MARS, we added the following content after line 162: *“MARS necessitates keeping $\lceil 1/\delta \rceil$ sub-sampled means for each arm. This leads to a memory requirement of $O(Kn)$ when $\delta=1/n$. The computational complexity of MARS also depends on the choice of $\delta$, as $\lceil 1/\delta \rceil$ sub-sampled means are updated in each round. Notably, to reduce the computational burden, the required Bernoulli variables in the algorithm can be pre-generated and stored before the start of the game.”* To further explore the efficiency of MARS and its applicability, we compared its runtime with that of alternative approaches. The relevant table can be found in the attached PDF and has been added to Section 5 of the supplementary material. The following text explains and analyses the results of the table. *“Table 1 displays the average runtime of different approaches for the multi-armed bandit with a Uniform setup when the number of rounds is set to $2000$. In MARS, the parameter $\delta$ is set to $1/1000$, requiring updates of $1000$ sub-sampled means in each round. As a result, it takes more time than Vanilla UCB, Thompson Sampling, and PHE. However, MARS is faster than approaches like GIRO, which stores the full history of rewards.”* **@R1-A3) Probabilities in the first two rows need to be swapped** The probabilities were swapped, and the typo was fixed. 
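The bookkeeping behind the memory argument in @R1-A2 can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it keeps $\lceil 1/\delta \rceil$ running sub-sampled means per arm (rather than the reward history) with pre-generated Bernoulli membership variables; the arm count, horizon, and reward model below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, delta = 3, 500, 1 / 100
M = int(np.ceil(1 / delta))                    # number of sub-sampled means per arm

include = rng.integers(0, 2, size=(K, n, M))   # pre-generated Bernoulli(1/2) variables
sums = np.zeros((K, M))                        # running sums per (arm, sub-sample)
counts = np.zeros((K, M))                      # running counts per (arm, sub-sample)
pulls = np.zeros(K, dtype=int)

def update(arm, reward):
    """O(M) per-round update of the arm's sub-sampled means; the reward is not stored."""
    mask = include[arm, pulls[arm]]
    sums[arm] += mask * reward
    counts[arm] += mask
    pulls[arm] += 1

def subsampled_means(arm):
    with np.errstate(invalid="ignore"):
        return sums[arm] / counts[arm]         # NaN where a sub-sample is still empty

for t in range(200):                           # feed arm 0 symmetric rewards with mean 0.3
    update(0, rng.normal(0.3, 1.0))
m = subsampled_means(0)
print(np.nanmean(m))                           # average of the M means; lands near 0.3
```

The memory used is the $O(KM) = O(Kn)$ (for $\delta = 1/n$) claimed in the rebuttal, regardless of how many rounds are played, which is the contrast with history-storing approaches like GIRO drawn above.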
**@R1-A4) Section 1 can be standalone without bandits** As suggested, this section can be treated as a standalone section introducing a new upper confidence bound for random variables. The section has been modified, and the subscripts "i" have been removed. **@R1-A5) By simply setting $\text{UCB}(T, \delta) = \infty$ whenever $T < \log_2(1/\delta)$, the same regret upper bound is achieved** By doing so, equation (3) in the proof of Theorem 3 in the supplementary material no longer holds, as we use: $$\mathbb{P}\left(\mu_1\geq{\text{UCB}}_1(s,\delta)\right)=\delta$$ Hence the modification is necessary. **@R1-A6) Does Theorem 3's gap with UCB under a known noise distribution for large Delta remain unavoidable, or can it be closed using alternative approaches** In the current version of the algorithm, we believe this gap is unavoidable, primarily due to the assignment of infinity to the UCB in the initial iterations. It is notable that the mentioned modification and assignment enable obtaining a guaranteed confidence bound for any number of observations without directly using the $\psi(\cdot)$ function or making any other distributional assumptions on the arms. We would like to thank you once again for your insightful feedback. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I'll keep my score and continue to recommend acceptance. Regarding A5), I believe in the analysis of UCB you only need the inequality $\mathbb{P}(\mu_1 \ge \text{UCB}_1(s,\delta)) \le \delta$ right? So using $\text{UCB}_1(s,\delta) = +\infty$ for small $s$ does not hurt the order of the regret. Or equivalently these could be treated as forced exploration rounds.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable and generally positive feedback. We are encouraged that the reviewers found our work novel in terms of *“used a cute observation in [8] [...] provides a fully data-driven approach to construct the upper confidence bound, and this manuscript shows that the resulting algorithm nearly achieves the optimal regret [...]”* (**R1**), *“Rigorous theoretical analysis”* (**R2**), *“[...] quite interesting and seems to be significantly simpler than existing studies on data-dependent bounds”* (**R3**), *“Interestingly, these guarantees are achieved while the algorithm does not use the function directly, hence avoiding sub-optimal choices of [...]”* (**R4**), and *“Recently, the literature on non-parametric algorithms based on bootstrap and sub-sampling has proposed interesting new approaches, and this paper is in my opinion a nice addition to this literature, showing new potentialities”* (**R4**). Before discussing the main changes and added empirical evaluations, it is worthwhile to mention a few points about the rebuttals. + The numbered citations refer to references in the submitted paper. The additional citations are labelled by [\*]. + As two new algorithms were added to all simulations, Figures 2(a), 2(b), and 3 in the paper and Figures 1 and 2 in the supplementary material were replaced with the new figures found in the attached PDF file. One new figure and one new table were added to the paper, both of which are available in the provided PDF. The figures referenced in all our responses to reviews are the new figures in the attached PDF file. + To facilitate reference and prevent redundancy, we've labelled responses to reviewers as **@R{1/2/3/4}-A{1/2/…}**, e.g. **@R2-A5** means Answer 5 in the rebuttal letter for Reviewer 2. 
The principal modifications and enhancements made to the paper are outlined as follows: **Improvement of the empirical evaluation of the approach** As suggested by **@R3**, two approaches, namely GIRO and PHE, were added to all simulations, and the following text was added to the paper after line 200: *“In GIRO, the parameter $a$, which represents pseudo-rewards per unit of history, is set to $1$. Due to the high sensitivity of the PHE approach to the choice of the tunable parameter $a$, simulations were performed for two values of $a$.”* The additional simulations demonstrate the strength of MARS as a non-parametric and scale-free algorithm, exhibiting strong performance without relying on tail information. The PHE algorithm relies on an adjustable parameter, and PHE with two values for that parameter was added to the experiments. Interestingly, the parameter that performed remarkably well in the Gaussian setup exhibited the weakest performance in the uniform setup, and conversely, the parameter that excelled in the uniform setup displayed the poorest performance in the Gaussian setup. This contrast highlights the importance of scale-free methods like MARS in real-world problems where the reward distribution is unknown. For further discussion of the added simulations, please see Reviewer 3’s rebuttal under the heading **@R3-A1** for simulations added to the main paper and **@R3-A2** for simulations added to the supplementary material. **Discussion on the memory and computational complexity of the algorithm and a simulation comparing runtimes** To provide clarity on this significant algorithmic aspect, an additional paragraph was included in the paper, accessible at **@R1-A2**. Furthermore, a comparison of the MARS algorithm's runtime with that of other algorithms was included, and the outcomes are presented in the table in the attached PDF. The simulations indicate that MARS is faster than GIRO, which retains the entire history of rewards. 
For more detailed explanations of the runtime comparisons, please refer to **@R1-A2**. **Revision of Literature Review and Clearer Articulation of Contributions** As advised by **@R4**, we revised the related work section to provide more precise explanations of existing algorithms along with their respective limitations and strengths. The main contribution of the paper, which is *“MARS achieves logarithmic regret without using the function $\psi(\cdot)$. Hence it avoids sub-optimal choices of $\psi(\cdot)$”*, was explained further in the contribution section. Please see **@R4-A1** for further details. **MARS performance for non-symmetric reward distributions** A new simulation was added to the supplementary material of the paper demonstrating MARS's performance when the reward distribution is Bernoulli (non-symmetric). The result is shown in Figure 3 of the supplementary material in the attached PDF file. The results indicate that although the setup does not meet the symmetric assumption, MARS still outperforms Vanilla UCB and Thompson sampling in this setup. Its performance is also comparable to that of BESA. Moreover, robustification of the approach for asymmetric rewards was added as a topic of future research, inspired by the paper [*]. [*] Care, Algo, Balázs Csanád Csáji, and Marco C. Campi. "Sign-Perturbed Sums (SPS) with asymmetric noise: Robustness analysis and robustification techniques." 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. Please see **@R1-A1** for further details. We would like to express our gratitude once more for your time and effort. Pdf: /pdf/7792d5c95f4591c62c38d7884204bf4ab575909d.pdf
NeurIPS_2023_submissions_huggingface
2023
EMMA-X: An EM-like Multilingual Pre-training Algorithm for Cross-lingual Representation Learning
Accept (poster)
Summary: This paper proposes an EM-like multilingual pre-training algorithm that uses both supervised parallel data and unsupervised monolingual data to train an encoder model such that it produces language-agnostic representations for multiple languages. That is, the embeddings of sentences with similar meanings will be the same across languages. The method uses an EM procedure that employs a GMM classifier, which is pre-trained with parallel data to produce a similarity rank between two input sentences, and then trains the encoder model both with parallel data and with ranks produced by the GMM on unsupervised data. The method achieves higher scores across many inference, similarity, and retrieval tasks compared to unsupervised XLM-R. Strengths: * The methodology seems original and new, especially the new rank-based classifier. * The paper's presentation and method description are clear and understandable. Weaknesses: There are multiple concerns with the evaluation and comparisons of the method. * Despite heavily utilizing large-scale parallel data from multiple languages in the pre-training stage of the encoder and GMM classifier, the paper compares its method with unsupervised models like XLM-R. This makes the comparison in the main Table 1 invalid. The method is better described as semi-supervised, with supervised pre-training and unsupervised fine-tuning stages. * There are some comparisons with supervised models like LaBSE, but apparently the proposed method does not score well against supervised models. Moreover, the writing seems to suggest the paper is comparing unsupervised vs. supervised, but in fact this is supervised vs. supervised. * XLM-R is not designed for language-agnostic representation, but there are notable baselines that achieve the same language-agnostic objective with simpler methodology: [LASER](https://github.com/facebookresearch/LASER), which uses parallel data, and [CRISS](https://arxiv.org/abs/2006.09526), which does not use parallel data. 
Not only are they not compared, they are also not mentioned in the paper. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: NA Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: No limitations mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. **Q1: Unfair comparison between EMMA-X and unsupervised method (XLM-R).** Since EMMA-X is a semi-supervised method, we have undertaken a comprehensive evaluation comparing our results with four supervised methods (InfoXLM [1], HiCTL [2], LaBSE [3], and S-BERT [4]) in Tables 1, 2, and 3, showing substantial improvements. The reason for including XLM-R [5] in our comparisons is that EMMA-X is initialized from XLM-R. **To ensure a fair assessment, we additionally retrained XLM-R, InfoXLM, and HiCTL using the same parallel data as EMMA-X, and EMMA-X outperforms the retrained baselines by 7.97% on average, as presented in Table 1 with the symbol \ddag.** **Q2: Poor performance of EMMA-X when compared with supervised models (LaBSE).** We have not claimed that EMMA-X is an unsupervised method and explicitly emphasize the usage of supervised data to initialize the model in both Section 3.2 and Algorithm 1. As you said, EMMA-X is a semi-supervised method. EMMA-X outperforms two supervised baselines on all tasks and even outperforms the state-of-the-art supervised baseline (LaBSE) on two tasks and all long-tail languages. It is hard to directly compare EMMA-X with LaBSE, as LaBSE uses a large pre-training dataset containing CC-100, Wikipedia, and a carefully selected 6B translation corpus, which is much better than EMMA-X's training data in both quantity and quality. However, the notable improvements exhibited by EMMA-X on WMT21QE [6], Mewsli-X [7], and LaREQA [8] in Table 2 and on the low-resource languages of Tatoeba [9] and FLORES-200 [10] in Table 3 further demonstrate the effectiveness of EMMA-X. **Q3: Lacking baselines and citations.** While LASER [11] is a notable baseline, we do not compare with it because we have chosen a stronger and more recent baseline, LaBSE. 
We have cited LASER in our paper from the perspective of its data **in line 235**, and we will add a more comprehensive discussion about LASER, LaBSE, S-BERT, and EMMA-X in the next version of our paper. EMMA-X and its baselines are encoder-only model designed for learning representations, while CRISS [12] is an unsupervised seq2seq model that mainly performs on machine translation. We will mention CRISS in the next version. **References:** [1] Chi Z, Dong L, Wei F, et al. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. [2] Wei X, Weng R, Hu Y, et al. On learning universal representations across languages. [3] Feng F, Yang Y, Cer D, et al. Language-agnostic BERT sentence embedding. [4] Reimers N, Gurevych I. Sentence-bert: Sentence embeddings using siamese bert-networks. [5] Conneau A, Khandelwal K, Goyal N, et al. Unsupervised cross-lingual representation learning at scale. [6] Lucia Specia, Frédéric Blain, Marina Fomicheva, Chrysoula Zerva, Zhenhao Li, Vishrav Chaudhary, and André F. T. Martins. Findings of the WMT 2021 shared task on quality estimation. [7] Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, and Melvin Johnson. XTREME-R: Towards more challenging and nuanced multilingual evaluation. [8] Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. LAReQA: Language-agnostic answer retrieval from a multilingual pool. [9] Mikel Artetxe and Holger Schwenk. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. [10] Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. No language left behind: Scaling human-centered machine translation. [11] Mikel Artetxe and Holger Schwenk. 
Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. [12] Tran C, Tang Y, Li X, et al. Cross-lingual retrieval for iterative self-supervised training. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks to the authors for the clarification. Given it is a fair comparison, the method is still not strong and does not outperform consistently across benchmarks. Nonetheless, I changed the scores to reward the novelty of the work. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable feedback on our paper. Your suggestions have been helpful in guiding our revisions. We will take your advice into consideration and include more thorough comparisons between EMMA-X and baselines, which will serve to validate and enhance our research findings. Your guidance and support are highly appreciated. Thank you for helping us improve our work.
Summary: This paper proposes an EM-like pre-training algorithm called EMMA-X to learn cross-lingual representations. The authors unify semantic relation classification and universal representation learning into the framework. To this end, a GMM classifier and a cross-lingual encoder are jointly trained in the algorithm. Finally, the authors validate their method by conducting experiments on 12 cross-lingual tasks including inference, similarity, retrieval, and classification. Strengths: - The paper is well-structured and easy to follow. - The EM-like pre-training, which includes a semantic relation rank task, is somewhat novel. - The experiments are extensive, and the results generally indicate the proposed method is effective. - The geometric analysis section is interesting and informative, where the authors visualize the representations from four different models. Weaknesses: - Although the authors claim that EMMA-X can learn good representations with excessive non-parallel data, the encoder, as far as I can see, is continue-trained with InfoNCE on parallel sentence pairs for initialization. Therefore, this initialization weakens the claim. - The authors do not verify the choice of a GMM classifier. At least, the authors should give some simple intuition to the readers. - The semantic relation task is very important in the proposed framework. The authors claim that a binary separation is not good, but do not give further justification for the choice of N=4 semantic ranks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Consider removing the comma after each equation - Line 69: Why do we have to use L-2 normalization? There are multiple ways to perform normalizations; L-2 is just one of them. Therefore, I would rather say $f(\cdot)$ is a normalization, e.g., L-2.
- Figure 1 caption: "{y1, and y2, y3, ...}" -> {$y\_1, y\_2, y\_3, \cdots$} - In Figure 1, I assume $\gamma^{(x)} - \gamma^{(y_1)}$ is missing - The number of semantic ranks is set to 4. This sounds to me like an arbitrary selection. Did you do experiments to justify the choice? - The paper claims that the model can learn cross-lingual universals with the aid of massive multilingual non-parallel data, but the cross-lingual encoder is continue-trained with InfoNCE on some parallel sentence pairs. I would assume this continue-training step is even more important than the method proposed. I would suggest the authors justify the initialization by simply using the model weights of XLM-R. - Line 188-190: The number of sentence pairs is 3.2 billion, but what is the exact number of non-parallel sentences? That should also be mentioned in the main content of the paper. - Line 192: $\mathbb{R}^1$ -> $\mathbb{R}$ - Table 1: the authors should also use special characters to denote which results (BUCC and Tatoeba?) are obtained in a zero-shot manner for EMMA-X. - Figure 2 and Table 4: according to the results, S-BERT performs very well (even better than the proposed method), especially on long-tail languages. In this case, what would be the benefit of applying EMMA-X to those languages instead of simply using S-BERT? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not include a limitation or broader impact section, therefore not applicable.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments. **Q1: Initialization with parallel corpora weakens motivation.** The primary goal of EMMA-X is to acquire universal semantic representations for a multitude of languages. However, it is important to note that only a limited number of languages (4%) possess parallel data. Therefore, EMMA-X strives to leverage non-parallel data to extend language coverage for universal semantic representations and to enhance representation performance for those languages that have limited available resources. To provide additional evidence supporting our claim, we propose an ablation experiment in the “**General responses to all Reviewers**” part. The results in Table 5 demonstrate that EMMA-X significantly improves the retrieval accuracy by 8.1 when compared to solely using parallel data. Moreover, on 16 long-tail languages, the retrieval accuracy further increases from 76.2 to 90.7, a relative improvement of 19.0%. These findings support our claim that EMMA-X can indeed learn effective universal semantic representations from an abundance of non-parallel data. **Q2: Comparison between the GMM classifier and other simple ones.** Please refer to the response to Q2 included in the “**General responses to all Reviewers**” part for details. **Q3: Missing justification for the choice of semantic similarity ranks.** Please refer to the response to Q3 included in the “**General responses to all Reviewers**” part for details. **Q4: The reason to use L2-normalization.** We measure the semantic similarity between sentences via cosine similarity, which requires L2-normalization of the encoder's output embeddings. We also apply L2-normalization following the common configuration in MoCo [1], HiCTL [2], and InfoXLM [3]. **Q5: The exact number of non-parallel sentences.** As shown in Appendix A, we use 800B of non-parallel data covering 94 languages, which is about 67 times larger than the parallel data.
We will add the statistics to the main content of the paper in the next version. **Q6: The benefit of using EMMA-X rather than S-BERT, based on results from Figure 2 and Table 4.** Table 4 demonstrates that EMMA-X exhibits superior performance on low-resource languages compared to S-BERT [4], surpassing it by 69.5%, 5.0%, and 29.5% on three geometric criteria, respectively. Similarly, Figure 2 provides visual evidence supporting this claim. Specifically, EMMA-X showcases smaller language-specific clusters on four long-tail languages (tk, xh, tt, and eo) when compared to S-BERT. The seemingly abnormal high KL-divergence result for S-BERT can be attributed to its representation space degenerating into a low-dimensional manifold (see the low isotropy score in Table 4 and Figure 2(k)), where different languages are not distributed uniformly across the whole representation space, which limits its expressive ability. **Q7: Typo and expression errors.** Thanks for pointing out the errors. We will proofread the content again to fix the wrong expressions and typos. **References:** [1] He K, Fan H, Wu Y, et al. Momentum contrast for unsupervised visual representation learning. [2] Wei X, Weng R, Hu Y, et al. On learning universal representations across languages. [3] Chi Z, Dong L, Wei F, et al. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. [4] Reimers N, Gurevych I. Sentence-BERT: Sentence embeddings using siamese BERT-networks. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed response. All my questions have been appropriately answered now. I'll raise the recommendation score. --- Reply to Comment 1.1.1: Comment: Thank you so much for your thoughtful message. We are delighted that our responses provided the information you needed. Your willingness to raise the recommendation score is truly generous and means a lot to us. Thanks again for your appreciation!
Summary: This paper proposes a new approach to cross-lingual representation learning as a universal alignment solution for any two languages without parallel data. The proposed approach resembles an EM algorithm and consists of a classifier that quantifies the semantic similarity of two non-parallel sentences, as well as a sentence encoder that aligns the embedding representations of different languages. During the training process, the two components receive supervision from each other to reinforce the capability to recognize accurate semantic similarity, and the paper also provides theoretical justification demonstrating that the mutual influence is positive and effective. A new benchmark is also proposed that comprises four categories of existing cross-lingual transfer tasks; the proposed approach is compared against other recent baselines, as well as ChatGPT. The empirical results show that new state-of-the-art performance can be obtained by this new approach with good margins. Strengths: - A new approach is proposed based on a novel EM-like algorithm with mutual inference and supervision by two components. In particular, the approach proposes to rank the similarity of a sentence pair, rather than optimizing the traditional binary positive/negative contrastive objective, which is indeed interesting and effective as shown by the results. - State-of-the-art performance is achieved by this approach on the new benchmark consisting of different types of existing cross-lingual transfer tasks, comparing against strong baselines such as XLM-R. - A theoretical analysis is provided, in addition to the superior empirical results, to justify the effectiveness of the proposed approach.
Weaknesses: - It might be over-claiming that the proposed approach operates to align the representations of different languages without parallel data, since Algorithm 1 clearly shows that a parallel corpus is needed to bootstrap both the similarity classifier and the sentence encoder, and all experiments are based on this warm-up that leverages the parallel corpus. - There is a lack of in-depth ablation analysis of the proposed approach. In particular: - There could be a simplified version of this approach without iterative optimization: we can first train a similarity classifier based on the proposed mixed-up method, and generate similarity rankings of sentence pairs on non-parallel data, which can then be used to optimize the contrastive objective directly for training the encoder. In this way, we can verify whether the improvement mostly comes from the fine-grained similarity ranking, and how much the iterative optimization mechanism contributes to the final performance. - There could also be another experiment that downgrades the similarity ranking to binary, as in the traditional positive/negative pairing. This could show whether the EM-like algorithm is really necessary and how much this process improves upon the plain contrastive objective. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the primary reason for adopting GMM as the similarity classifier? Can we use a normal classifier (e.g., using CLS on Transformers) and simply use its softmax probability for each rank? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: It would be good to discuss how much degradation could happen if there is not enough parallel corpus for the warm-up.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive reviews. **Q1: Over-claiming that EMMA-X operates without parallel data.** The primary goal of EMMA-X is to acquire universal semantic representations for a multitude of languages. However, it is important to note that only a limited number of languages (4%) possess parallel data. Therefore, EMMA-X strives to leverage non-parallel data to extend language coverage for universal semantic representations and to enhance representation performance for those languages that have limited available resources. **Q2: Adding an ablation study about the EM iterative optimization.** Thanks for your constructive advice. We apply an iterative paradigm to optimize the GMM classifier because the parallel data does not cover enough languages to train a reliable classifier, and the semantic rank expectation from the GMM classifier can be biased. The iterative optimization helps the GMM classifier learn better on long-tail languages. To provide additional evidence supporting our claim, we compare EMMA-X with a new baseline that applies a fixed GMM classifier to generate similarity ranks to optimize the encoder, referred to as "Phase 1 + Fixed GMM". The results are shown in the ablation experiments (Table 5) in the "General Response to All Reviewers." From Table 5, we can see that EMMA-X improves retrieval accuracy by 6.6 compared with "Phase 1 + Fixed GMM". On 16 long-tail languages, the retrieval accuracy gap further widens to 10.8. This reveals that the iterative optimization mechanism can further improve the quality of the GMM classifier, especially for long-tail languages. **Q3: Adding an experiment that downgrades the similarity ranking to binary.** Please refer to the response to Q3 included in the “**General responses to all Reviewers**” part for details. **Q4: The reason to adopt GMM as the classifier.** Please refer to the response to Q2 included in the “**General responses to all Reviewers**” part for details.
--- Rebuttal Comment 1.1: Comment: Thanks for your response. I am looking forward to the experiments that substitute GMM with a normal classifier, which I think would provide good insights on this design choice during the EM iterations. --- Reply to Comment 1.1.1: Comment: We would like to express our gratitude for your support and encouragement! Your positive feedback means a lot to us, and we are thrilled that you view our work in such a positive light. We will continue to strive for excellence in future research and improvements.
Summary: The paper proposes EMMA-X, an EM-like approach to pretrain multilingual models. It learns a cross-lingual representation learning task and a semantic relation prediction task within EM. The authors propose a new benchmark, XRETE, with 12 cross-lingual tasks to evaluate the experiments. The training involves two stages: 1) pretraining using parallel data (bitext) and 2) training within the EM framework on non-parallel data. The proposed approach outperforms the baselines. Strengths: - The proposed method outperforms other existing pre-trained models, showing solid multilingual/cross-lingual ability. - The proposed method is backed by a strong theoretical analysis. - The paper proposes a new benchmark from existing datasets to evaluate cross-linguality. Weaknesses: - The main paper does not provide enough information about the model. I would suggest moving the model details from the Appendix to the main paper. - The benchmark does not report model parameter counts. The comparison may not be fair since the model sizes are different. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Suggestion** - Figure 1 is not easily understandable. It would be great if there were a clear step-by-step procedure in the caption. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No. The authors could have a section to address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive reviews. **Q1: Moving model details from the Appendix to the main paper.** Due to page limitations, certain model details have been included in Appendix A. We will incorporate the model details into the main paper in the next version. **Q2: Lack of information about model parameters in the benchmark.** We have provided the details of our baseline models in Appendix D and will move this part to the main paper in the revision. Actually, most of our baseline models share identical model parameters with EMMA-X, as they are also further trained based on the XLM-R [1] model. The only exception is LaBSE [2], which is further trained based on the BERT model, with the only difference being the size of the vocabularies. **Q3: Adding a step-by-step procedure in the caption of Figure 1.** In Section 3.3, we have already introduced EMMA-X step by step, and further illustrated the procedure through Algorithm 1. To provide readers with a better understanding of EMMA-X, we will include a step-by-step procedure in the caption of Figure 1. **References:** [1] Conneau A, Khandelwal K, Goyal N, et al. Unsupervised cross-lingual representation learning at scale. [2] Feng F, Yang Y, Cer D, et al. Language-agnostic BERT sentence embedding.
Rebuttal 1: Rebuttal: **General Response to all Reviewers:** We thank all reviewers for their time and effort. **Q1: Clarification about “unsupervised” / “without parallel data” in EMMA-X.** In EMMA-X, we explicitly show the use of parallel data for initializing the model in Section 3.2 and Algorithm 1. The primary goal of EMMA-X is to acquire universal semantic representations for a multitude of languages. However, it is important to note that only a limited number of languages (4%) possess parallel data. Therefore, EMMA-X strives to leverage non-parallel data to extend language coverage for universal semantic representations and to enhance representation performance for those languages that have limited available resources. To further support our claims, we provide an extra ablation study showing how each part of EMMA-X affects performance; the results are shown in Table 5. **Q2: The reason to use a GMM classifier.** Given the absence of direct supervision signals, we intuitively hypothesize that each semantic rank follows a Gaussian distribution. To effectively cluster each sentence pair into its corresponding semantic rank, we employ a Gaussian Mixture Model (GMM) as the classifier. Moreover, the GMM classifier offers several advantages: 1. Addressing imbalanced data: The GMM classifier is particularly advantageous in handling imbalanced datasets. It can model clusters of varying sizes, ensuring that minority-class data points are not overshadowed by the majority class. By allowing data points to contribute to multiple clusters with different probabilities, the GMM classifier mitigates the imbalance issue effectively. 2. Effective handling of outliers: With its soft-assignment approach, the GMM classifier can effectively deal with outliers. Outliers are assigned low probabilities of belonging to any cluster, reducing their impact on the cluster centers and covariances.
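The soft-assignment behavior the rebuttal describes can be illustrated with a small numpy sketch. This is a hypothetical one-dimensional illustration with made-up means, variances, and mixture weights over a similarity score in [0, 1]; it is not the paper's actual classifier:

```python
import numpy as np

def gmm_responsibilities(x, means, stds, weights):
    """Posterior probability that each score in x belongs to each rank's Gaussian."""
    x = np.asarray(x, dtype=float)[:, None]           # shape (n, 1)
    means = np.asarray(means, dtype=float)[None, :]   # shape (1, k)
    stds = np.asarray(stds, dtype=float)[None, :]
    weights = np.asarray(weights, dtype=float)[None, :]
    # weighted Gaussian densities, then normalize over the k components
    dens = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return dens / dens.sum(axis=1, keepdims=True)     # rows sum to 1 (soft assignment)

# four semantic ranks modeled as Gaussians (hypothetical parameters)
means, stds, weights = [0.1, 0.4, 0.7, 0.95], [0.1, 0.1, 0.1, 0.05], [0.4, 0.3, 0.2, 0.1]
resp = gmm_responsibilities([0.05, 0.5, 0.92], means, stds, weights)
```

Because every pair receives a probability under every rank, an outlier score contributes only weakly to any single component, which is the soft-assignment property the rebuttal appeals to.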
To further substantiate our claims, we plan to include an additional experiment that compares the performance of the GMM classifier with that of a normal classifier in the forthcoming version. **Q3: Choice of the semantic similarity rank.** The number of semantic similarity ranks was chosen heuristically; we chose N=4 based on the following two reasons: 1. Addressing data imbalance: A fine-grained similarity rank allows for a smoother distribution of semantic ranks. In the case of binary classification of semantic relations, there tends to be a significant imbalance between negative (non-semantically similar) and positive samples. By employing a fine-grained semantic rank, we can better distribute sentence pairs originally classified as negatives into different ranks, thus mitigating the data imbalance issue. 2. Learning complexity consideration: Conversely, defining a higher number of semantic ranks poses challenges for both the GMM classifier and the cross-lingual encoder to learn. Therefore, we heuristically arrived at the choice of N=4, as it represents a good trade-off between addressing the data imbalance problem and managing learning complexity. To provide further support for our approach, we are currently conducting experiments varying the number of semantic ranks, and we intend to include these findings in the next version of our paper. Pdf: /pdf/8b2dfce80ec6c08f7153fac2888cbada16dec12e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes to apply the EM framework to realize unified sentence representation learning with non-parallel multilingual data. The framework consists of two modules, a GMM classifier and a cross-lingual encoder, which are responsible for semantic relation classification and unified cross-lingual representation, respectively. However, this is achieved on the premise that parallel multilingual data is used to initialize the models. In addition, the authors conduct a theoretical analysis and form a new benchmark, on which the effectiveness of the framework is demonstrated. Strengths: The paper applies the EM algorithm to further optimize a parallel-trained unified multilingual representation model using non-parallel data. Weaknesses: 1. Why choose the GMM model as the semantic relation model? 2. Why form the XRETE benchmark; what are its innovations and necessity? 3. This article introduces monolingual data from a continuous perspective, so what are its advantages compared with discrete methods (e.g., back-translation)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why choose the GMM model as the semantic relation model? 2. Why form the XRETE benchmark; what are its innovations and necessity? 3. This article introduces monolingual data from a continuous perspective, so what are its advantages compared with discrete methods (e.g., back-translation)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments. **Q1: The reason to use the GMM model as the semantic relation model.** Please refer to the response to Q2 included in the “**General responses to all Reviewers**” part for details. **Q2: The reason to form the XRETE benchmark.** The formation of the XRETE benchmark is driven by the following reasons: 1. Inclusion of recent, dedicated, and challenging benchmarks: XRETE incorporates more up-to-date benchmarks that specifically focus on universal sentence representations. These newly added benchmarks, such as legal topics in MultiEURLEX [1], review topics in the Multilingual Amazon Reviews corpus [2], WMT Quality Estimation [3], and AmericasNLI [4], offer increased difficulty in evaluating the performance of models. 2. Coverage of more low-resource languages: XRETE goes beyond its predecessors by encompassing a broader range of low-resource languages. For instance, it includes 10 Indigenous languages of the Americas, which were not part of the XTREME [5] or XGLUE [6] benchmarks. This expansion ensures that the XRETE benchmark provides a more comprehensive evaluation of models' capabilities across diverse linguistic backgrounds. **Q3: Advantages of EMMA-X compared with back-translation.** EMMA-X offers several advantages over discrete methods like back-translation: 1. Faster parallel corpus construction: EMMA-X constructs parallel corpora faster than back-translation. The process of generating supervision signals in EMMA-X involves predicting the semantic relation rank between two sentences, which is more efficient than back-translation's auto-regressive generation of pseudo-parallel sentences. 2. Higher-quality parallel corpora for low-resource languages: For low-resource languages, EMMA-X constructs parallel corpora of higher quality than back-translation.
Back-translation may yield erroneous results for some languages, or be entirely unavailable for others, such as pa and ps. Conversely, EMMA-X retrieves real sentences from a monolingual, low-resource corpus, ensuring the production of more reliable and accurate parallel data. 3. Justification of semantic relation rank predictions during pre-training: EMMA-X uniquely unifies the prediction of semantic relation ranks with the learning of universal semantic representations within an Expectation-Maximization (EM) framework, which allows the semantic relation rank predictions to be refined during pre-training. In contrast, discrete methods like back-translation often rely on fixed translation models for pseudo-parallel corpus construction, leading to limited improvement in the quality of sentence representations. **References:** [1] Chalkidis I, Fergadiotis M, Androutsopoulos I. MultiEURLEX--A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. [2] Keung P, Lu Y, Szarvas G, et al. The multilingual Amazon reviews corpus. [3] Specia L, Blain F, Fomicheva M, et al. Findings of the WMT 2021 shared task on quality estimation. [4] Ebrahimi A, Mager M, Oncevay A, et al. AmericasNLI: Evaluating zero-shot natural language understanding of pretrained multilingual models in truly low-resource languages. [5] Hu J, Ruder S, Siddhant A, et al. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. [6] Liang Y, Duan N, Gong Y, et al. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation.
null
null
null
null
null
null
PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning
Accept (poster)
Summary: The paper studies the generalization and plasticity of deep reinforcement learning (RL) agents. The paper's primary contribution is a combination of the existing Sharpness-Aware Minimization (SAM) and Resetting mechanisms. The main insight is that combining SAM+Resets provides additive benefits: the former induces feature sparsity (and hence could be argued to improve generalization), while the latter yields more active units for the last layers (and hence could be argued to increase network plasticity). Moreover, the authors conduct an analysis on synthetic data and arrive at the conclusion that SAM improves adaptability to input changes, while resets mostly improve the ability to adapt to new targets. When SAM+Resets are applied to Data-Efficient Rainbow (DER) and Data-regularized Q (DrQ) on Atari 100k, the combination gives consistent improvements. Strengths: The main strength of the paper is a thorough empirical investigation: - The majority of claims are supplemented with supporting evidence: for example, the authors report sharpness metrics (such as the trace of the Fisher matrix and the largest Hessian eigenvalue) to demonstrate that SAM increases the flatness of the solution (the result is non-trivial since, unlike in supervised learning, the loss surface is always changing in RL). - The paper presents ablations for the design choices when applying SAM to RL algorithms (for example, whether to apply SAM for the whole network or for a subset of layers in DER or whether to apply SAM for both actor and critic networks in DrQ or individually). - The SAM+Resets combination is thoroughly compared to other methods that were proposed to address plasticity loss (such as layer normalization and concatenated ReLUs). 
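For readers unfamiliar with Sharpness-Aware Minimization, its two-step update (ascend to a worst-case point within a small L2 ball, then descend using the gradient taken there) can be sketched as follows. This is a toy numpy illustration on a quadratic loss with made-up hyperparameters, not the paper's implementation:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update: perturb weights toward the local worst case within
    radius rho, then update the ORIGINAL weights with the perturbed gradient."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent step
    g_sam = grad_fn(w + eps)                     # gradient at the perturbed point
    return w - lr * g_sam

# toy quadratic loss L(w) = 0.5 * ||w||^2, so grad L(w) = w
grad_fn = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w, grad_fn)
```

Note that each SAM step costs two gradient evaluations, which is why the paper's ablations over where to apply SAM (whole network vs. a subset of layers, actor vs. critic) matter in practice.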
Weaknesses: There are several weak spots that prevent the reviewer from assigning a higher score: - The SAM+Resets combination is applied to the DER algorithm on Atari 100k, which is a weaker baseline than the SPR algorithm used in the original resets paper; this choice makes the reviewer wonder if the addition from SAM would be smaller for SPR and other more advanced algorithms. - Figure 4 contains an encouraging demonstration of improvements in replay ratio scalability for DrQ on Atari-100k from adding SAM on top of resets. However, the paper does not have scalability results for the other settings (DrQ on DMC / DER on Atari 100k / SAC on DMC from proprioceptive states), making the reviewer question the generality of the insight. There are also several minor weaknesses: - Table 1 contains placeholder performance values 0.xxx for DER + CReLU - Putting in bold the results for SAM + Reset in Table 2 might mislead a limited-attention reader. The difference between SAM + Reset 516 (441, 590) and Reset 514 (434, 590) is insignificant and should not be highlighted. Addressing the outlined limitations might increase the score. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The paper doesn't study continuous control from proprioceptive states: what was the motivation for excluding this setting? The combination of preliminaries in Section 3 and the experimental details in Section 5.1 is confusing: first, the authors give background about the Rainbow and Soft Actor-Critic algorithms, but later use their modifications, DER and DrQ, for actual experimentation. The reviewer suggests describing DER and DrQ directly. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: While the paper presents evidence about the effects of SAM+Resets on network adaptability, an explanation is lacking: why do SAM and Resets enhance input and label adaptability, respectively, and not vice versa? Likewise, the reviewer finds the result in Figure 2a (bottom left) intriguing: on their own, both resets and SAM decrease the fraction of active units, while their combination yields a strong increase. Why? Diving deeper into explaining these phenomena would increase the significance of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer mzhA, Thank you for your constructive feedback. To address your concerns, - We've experimented with SAM on advanced algorithms. - We provide an explanation of the generality of our insight. - Corrected the placeholder values (.xxx) in Table 1. - The presentation in Table 2 will be revised to avoid misinterpretations. - Explained the rationale behind excluding state-based environments. - We plan to improve clarity between DER and DrQ. Please let us know if you have any further comments or feedback. We will do our best to address them. > **Question 4.1:** The reviewer wonders if the addition from SAM would be smaller for SPR and other more advanced algorithms. To clarify, our method primarily builds upon DrQ, as detailed in Supplementary Section 5. In contrast to SR-SPR, which couples DrQ with SSL and Reset, our approach couples DrQ with SAM and Reset. We have tried SR-SPR + SAM and found that the performance gains were indeed marginal. This can likely be attributed to the inherent generalization enhancements from the SSL objectives already present in SPR. Additionally, our SAM + Reset combination has been rigorously tested with advanced algorithms like BBF and SimTPR on the Atari-100k benchmark. The results, showcased in Table A.2 (attached pdf), confirm the robustness and efficacy of this synergy. In our revised manuscript, we'll accentuate these findings, emphasizing the distinctions and comparative benefits of our approach. > **Question 4.2:** The paper does not have scalability results for the other settings (DER on Atari 100k / DrQ, SAC on DMC), making the reviewer question the generality of the insight. For DER on Atari-100k: DER utilizes a shallow, wide architecture and focuses on amplifying generalizable representations. Consequently, testing scalability on DER might detract from its foundational goal of enhancing the generalization of larger networks. 
For DrQ and SAC on DMC: Initially, we surmised that DMC's limited generalization advantages stemmed from lower input non-stationarity. Yet, our DMC-GB experiments in Table A.3 (attached pdf) and subsequent comparisons between Atari and DMC revealed that DMC in fact exhibits greater input non-stationarity. In addition, DMC employs rigorous reset protocols: DrQ resets almost 50% of its network, and SAC resets it entirely, as detailed in [1]. These comprehensive resets inherently counter input non-stationarities, consistently restoring the system to an initial state. As [1] has demonstrated the efficacy and scalability of such resets, we believe this further validates the relevance and breadth of our insights. Our further tests included scaling SAM+Reset on SimTPR, a 30-layer CNN model. As presented in Table A.1, the synergy of SAM and Reset is evident at a replay ratio of 2 and retains its potency when increased to 4. Recent findings from BBF also highlight the harmonious interplay between generalization (via strong L2 regularization and self-supervised objectives) and plasticity (via resets) across replay ratios from 2 to 8. In light of these results, we remain confident in the broader applicability of our insights. [1] Sample-Efficient RL by Breaking the Replay Ratio Barrier, D’Oro et al., ICLR 2023. > **Question 4.3:** Table 1 contains placeholder performance values 0.xxx for DER + CReLU. Thank you for pointing out the placeholders. The updated values for DER + CReLU and DER + LayerNorm, previously marked as “0.xxx,” are provided in Supplementary Section 6, highlighted in red. We apologize for any inconvenience this caused. > **Question 4.4:** Putting in bold the results for SAM + Reset in Table 2 might mislead a limited-attention reader. We agree that emphasizing the results for SAM + Reset could mislead readers given the minimal difference from Reset alone. We will present the results without bolding. 
> **Question 4.5:** The paper doesn’t study continuous control from proprioceptive states — what was the motivation for excluding the setting? We emphasized image-based environments due to their inherent non-stationarity and the pronounced distribution shifts. In these tasks, even minor agent actions can drastically alter pixel distributions, demanding robust generalization. Conversely, state-based scenarios generally present more consistent input distributions, making them less challenging from a generalization perspective. We'll elucidate this choice further in our revised manuscript for clarity. > **Question 4.6:** The combination of preliminaries in Section 3 and the experimental details in Section 5.1 is confusing. The reviewer suggests describing DER and DrQ directly. Thank you for pointing out the potential confusion. DER and DrQ, while rooted in similar foundational algorithms, differ in aspects like model architecture and data augmentation. In our revised manuscript, we'll refine the presentation, explicitly highlighting these differences to ensure a clear understanding. > **Question 4.7:** Why do SAM and Resets enhance input and label adaptability and not vice versa? In Figure 2a, both resets and SAM decrease the fraction of active units in the head, while their combination yields a strong increase. Why? On the distinction in adaptability enhancements: SAM, by its design, results in sparser features, which may not readily allow for flexibility in adapting to shifting label dynamics. Reset, on the other hand, appears to naturally assist in label adaptability by preventing overfitting to evolving label relationships. However, intuitively, its precise contribution to input adaptability is less evident. Regarding the combined effect observed in Figure 2a: SAM's sparsity might necessitate more active head units to decipher diverse feature combinations, and when paired with Resets, this effect amplifies, leading to an overall rise in active units for the head. 
However, we acknowledge that this is a hypothesis, and further investigations are needed to solidify this understanding. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. In response to the rebuttal, I have updated my score to 7. I am eager to see follow-up works that even further deepen the understanding of the differences between input and target non-stationarities in RL. --- Reply to Comment 1.1.1: Comment: Thank you for your constructive feedback and for updating the score. We appreciate your insights and look forward to further exploring this topic in our future work.
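As a reference for the reset mechanism debated in this thread (reinitializing only the final layers rather than the whole network), a head-only reset might be sketched as follows. The parameter names, layout, and initialization scale are illustrative placeholders, not the paper's actual implementation.

```python
import numpy as np

def periodic_reset(params, rng, head_names=("head_w", "head_b"), scale=0.1):
    """Reinitialize only the listed 'head' parameters, leaving the
    backbone intact -- a sketch of a Reset(H)-style strategy; names
    and the Gaussian init scale are illustrative."""
    for name in head_names:
        params[name] = rng.normal(scale=scale, size=params[name].shape)
    return params

rng = np.random.default_rng(0)
params = {"backbone_w": np.ones((4, 4)),
          "head_w": np.ones((4, 2)),
          "head_b": np.ones(2)}
periodic_reset(params, rng)
assert np.array_equal(params["backbone_w"], np.ones((4, 4)))  # backbone untouched
assert not np.array_equal(params["head_w"], np.ones((4, 2)))  # head reinitialized
```

A Reset(B, H) variant would simply extend `head_names` to cover every parameter, which matches the "aggressive reset" distinction drawn in the rebuttal.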
Summary: This paper presents a new method to enhance sample efficiency in reinforcement learning, by integrating two existing techniques: sharpness-aware minimization (SAM) and weight resetting. It shows that SAM and resetting work in a complementary way where SAM addresses input adaptability and resetting addresses label adaptability. Experiments on Atari100K and DM control demonstrate the effectiveness of the proposed method. Strengths: - This paper is clear, well-written, self-contained, and enjoyable to read. - While the proposed method simply combines two existing approaches, such a combination is novel and well justified by the synthetic experiments and ablation studies. - The experiments are comprehensive and well executed. Weaknesses: - The authors mention that the relatively small improvement in DMC might be due to the reduced visual variety in DMC. The [DMControl Generalization Benchmark](https://github.com/nicklashansen/dmcontrol-generalization-benchmark) augments DMC with rich visual variety, which could be a good testbed to validate the claim. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Apart from the points raised above, I have some questions and comments, and would like to hear the authors' feedback: - Table 1: Some entries missing ("xxx" in the table). - Line 128: The sentence ("..., which encourages to Here, ...") is confusing. - I notice the authors have cited [10] in the related works. I am wondering if the authors have tried the soft resetting strategy on Atari, since it has been shown to be effective. - It would be better to add error bars in Figure 1(right), Figure 4, and Figure 5. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations of the proposed method have been adequately discussed in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer dLhs, Thank you for your constructive feedback. To address your concerns, - We explored the DMControl Generalization Benchmark (DMC-GB). Our findings from this extended study suggest that DMC exhibits a larger degree of input non-stationarity than we initially presumed. - We rectified discrepancies in Table 1, improved clarity in various sections, and enhanced our figures. - In a broader perspective, we've also conducted rigorous tests in synthetic environments and experimented with SAM + Reset on more advanced algorithms. Further details are described in our general response. Please let us know if you have any further comments or feedback. We will do our best to address them. > **Question 3.1:** The authors mention that the relatively small improvement in DMC might be due to the reduced visual variety in DMC. The DMControl Generalization Benchmark augments DMC with rich visual variety, which could be a good testbed to validate the claim. Thank you for bringing up the DMControl Generalization Benchmark (DMC-GB). In contrast to the traditional approach of DMC-GB—training on clear inputs and testing on noisy ones—we opted to train and test on noisy inputs. This approach aligns with our intention of illustrating the strength of different generalization techniques against input non-stationarity. Our experimental results in Table A.3 (attached pdf) indicate that when paired with a reset, SAM performs comparably to Adam. Without the reset, however, SAM surpasses Adam. An intriguing observation was that the DMC environment displayed a higher degree of input non-stationarity compared to Atari, even without the background noise introduced in DMC-GB. This challenges our initial hypothesis, suggesting that techniques for handling severe input non-stationarity may be even more important for DMC. Then, if DMC inherently demands addressing input non-stationarity, why does the reset strategy perform so well? 
The context lies in the different reset strategies between environments. In the original reset paper referenced as [1], DMC employs an aggressive reset strategy, reinitializing nearly 90% of its network for DrQ and 100% of its network for SAC. By contrast, in environments like Atari, only 50% of the network is reinitialized. This aggressive approach, highlighted in [1], effectively manages input non-stationarity, potentially overshadowing the benefits of SAM when paired. Our research primarily focuses on resetting the final few layers. However, the efficacy of broad resets in tackling input non-stationarity is evident in our synthetic experiments, where Reset (B, H) showcases effectiveness in Figure A.1 (attached pdf). While these aggressive resets worked well for DMC, we believe this strategy is not always optimal, as evidenced by the results of Reset (B, H) from Atari in Table A.1. Extensive reinitialization forces the model to relearn, and as networks grow larger, this becomes more challenging. Thus, we believe blending different strategies to combat input non-stationarity is essential to further enhance the performance of both DMC and DMC-GB. [1] The Primacy Bias in Deep Reinforcement Learning, Nikishin et al., ICML 2022. > **Question 3.2:** Table 1: Some entries missing ("xxx" in the table). The updated values for DER + CReLU and DER + LayerNorm, previously marked as “0.xxx,” are provided in Supplementary Section 6, highlighted in red. We apologize for any inconvenience this caused. > **Question 3.3:** Line 128: The sentence ("..., which encourages to Here, ...") is confusing. Thank you for pointing this out. We found that the sentence was inadvertently truncated. The correct sentence should read: "... which encourages a model parameter $w$ to find a smoother region of the loss landscape.” We will ensure it is corrected in the revised manuscript. > **Question 3.4:** I noticed the authors have cited [10] in the related works. 
I am wondering if the authors have tried the soft resetting strategy on Atari since it has been shown to be effective. Thank you for highlighting the soft resetting strategy referenced in [10]. Indeed, we have experimented with the soft reset approach on the Atari benchmarks, as detailed in Appendix Section 5. Our findings indicate that when the replay ratio is set at 2 (the standard setup in our primary experiments), the use of the soft reset (i.e., Shrink & Perturb) tends to degrade performance. However, as we increment the replay ratio, the advantages of the soft reset become notably evident, even surpassing the results of well-known state-of-the-art algorithms. > **Question 3.5:** It would be better to add error bars in Figure 1(right), Figure 4, and Figure 5. We will incorporate error bars in Figures 1(right), 4, and 5 in the revised manuscript. --- Rebuttal Comment 1.1: Title: Feedback Comment: I thank the authors for the detailed reply and additional experiments. I have one comment regarding Question 3.1 and hope the authors can take it into consideration when preparing the next version. The results show that "SAM + Resets" performs similarly to or worse than "SAM" on DMC and DMC-GB. The authors now give a new explanation, attributing it to the "aggressive reset strategy". If the benefits of SAM are potentially overshadowed by resetting on DMC, then the authors should make it clear in the paper (especially the abstract) and modify claims like "Extensive empirical studies on the Atari-100k and DeepMind Control Suite benchmarks demonstrate that this combined usage yields sparse, generalizable features and a dense, plastic policy.". --- Reply to Comment 1.1.1: Comment: We appreciate your feedback regarding Question 3.1. We agree with your observation and the importance of clarity around the impact of the "aggressive reset strategy". We will ensure that this clarification is adequately reflected in the revised manuscript.
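The Shrink & Perturb soft reset mentioned in the reply above can be sketched as shrinking the learned weights and adding a small random perturbation, rather than discarding them outright. This is a simplified illustration (the original formulation interpolates toward freshly initialized weights); the shrink and noise coefficients below are placeholders, not the values used in the paper.

```python
import numpy as np

def shrink_and_perturb(w, rng, shrink=0.8, noise_scale=0.01):
    """Soft reset: scale the current weights down and add small Gaussian
    noise, so most of the learned structure survives (unlike a hard reset).
    shrink/noise_scale are illustrative hyperparameters."""
    return shrink * w + noise_scale * rng.normal(size=w.shape)

rng = np.random.default_rng(0)
w = np.full((3, 3), 10.0)       # pretend these are learned weights
w_soft = shrink_and_perturb(w, rng)
# The result stays close to 0.8 * 10.0 = 8.0 plus tiny noise.
assert np.all(np.abs(w_soft - 8.0) < 0.1)
```

Compared with a hard reset, this preserves a fraction of the learned function, which is consistent with the rebuttal's observation that its benefits only emerge at higher replay ratios.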
Summary: This work studies the role of generalization and plasticity in sample-efficient deep RL. This paper proposes sharpness-aware minimization (SAM) to improve generalization in RL and provides details on how to use SAM with deep RL algorithms like SAC and Rainbow. Empirical evaluation shows that combined usage of SAM and periodically resetting the last few layers of the network improves sample efficiency in the Atari-100k benchmark. On the other hand, adding SAM does not provide any benefit over resetting in the DeepMind Control Suite. Strengths: This work tackles the critical problem of sample-efficient reinforcement learning. Sample efficiency is particularly important for applications where exploration is risky and expensive, such as healthcare. The paper proposes using SAM to improve the generalization of deep RL algorithms. SAM has not been used in RL before, so the details regarding the usage of SAM in RL are valuable. The paper attempted to study the role of generalization and plasticity in a simple continual supervised learning problem. This problem does not have the confounders found in RL, so it can improve our understanding of the underlying phenomena. The results on the Atari-100k benchmark show that combining SAM with resets improves the performance over just using SAM or resets. This exciting result shows that SAM can improve the sample efficiency of deep RL. Weaknesses: This paper contains interesting ideas and promising experimental results. The paper's central claim is that generalizability and plasticity constitute different roles in improving performance (lines 6-7). However, the experiments performed in the paper do not adequately test this claim. There are significant issues with the experiments in Sections 4 and 5.2. The paper claims that "we find that incorporating both generalization and plasticity is crucial for improving the model's ability to adapt to new data (Section 4)" (lines 56-57). 
Unfortunately, the experiments in Section 4 have the following major problems: 1. The paper claims that generalization and plasticity constitute different roles. To represent generalization, it used SAM, and to represent plasticity, it periodically reset the last few layers. Then two experiments were performed, one where the input changes over time and the other where the labels change. The results show that resetting the last few layers does not help when the input changes. But that is not surprising. Resetting the last few layers only injects plasticity into the last few layers. However, the conclusions drawn from this result are too general. The paper says that plasticity does not help when the input distribution changes, but we can only draw this conclusion if the paper contains results for methods that inject plasticity into the whole network. The experiments in Section 4 should include methods that inject plasticity into the full network. This means including methods that reset the last few layers and use CReLUs in the full network, or reset the last few layers and use LayerNorm after all the layers, or use selective reinitialization methods like Continual Backprop or ReDo. From the current results, it is unclear if SAM provides any benefits over plasticity-injecting methods. 2. The experiments were only performed for five random seeds (line 73 of the appendix), which raises questions about the statistical significance of these results. These experiments are performed on CIFAR, so they are probably not computationally expensive. I think it should be possible to perform more runs, say 30, to increase the confidence in these results. The paper also needs to report how the confidence interval is calculated in Figure 1. 3. The hyperparameters are not properly tuned for the input adaptation experiment. The experiment used a learning rate of 0.001, which was the smallest one that was tested. Even lower learning rates need to be tested. 
Similar problems exist with the experiments in Section 5.2. 1. Again, the resetting methods used in these experiments only inject plasticity in the last few layers. The effect of injecting plasticity in the full network needs to be tested before including SAM. 2. The paper proposes L2 regularization as a baseline for SAM. However, it is not tested in the experiments. Experiments with DER show that L2 performs better than SAM, so it is puzzling why L2+Reset is not compared to SAM+Reset. L2 + plasticity-preserving methods should be compared with SAM+Reset. Besides these major concerns, the clarity of the writing can be improved. For example, the sentence on line 128 needs to be completed. Spending more space to explain Section 3.2 will also be valuable for the community. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The paper uses two terms, "adaptability" and "plasticity." To me, these words have very similar meanings. Plasticity means the ability to learn from new data, and adaptability means being able to fit new data. Can you please explain in what sense you are using these words and what the difference between the two is? If they have a similar meaning, using just one word in the paper might be better. Currently, the use of these two words causes confusion in Section 4. Maybe in the conclusion of Section 4, it is better to say that SAM improves plasticity when there is input non-stationarity and resetting the last few layers improves plasticity when the labels change. 2. What is the confidence interval reported in Figure 2? Is it the 95% bootstrapped confidence interval? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The key limitation of the paper is that the experiments did not test the central claims of the paper. 
To claim that both plasticity and generalizability are important for improving adaptability, methods that inject plasticity into the whole network should be tested. Including the following baselines in Section 4 with proper hyper-parameter tuning and more random seeds will significantly improve the completeness of the claims. 1. CReLU 2. LN 3. L2 (properly tuned) + Reset 4. CReLU + Reset 5. LN + Reset Similarly, the following baselines in Section 5.2 need to be added. 1. CReLU + Reset 2. LN + Reset 3. L2 + Reset 4. L2 + best combination of CReLU, LN, Reset 5. SAM + best combination of CReLU, LN, Reset The paper contains interesting ideas and promising results. However, the current empirical evaluation is incomplete and does not support the central claim of the paper. Unfortunately, I cannot recommend accepting the paper in its current form. But I'm willing to change my score if the authors can include all the relevant baselines for experiments in Sections 4 and 5.2 and show that the conclusions still hold. EDIT: I have updated my score based on the new experiments and changes in the main message of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
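For readers unfamiliar with the CReLU baseline requested above: it concatenates the positive and negative rectifications of the pre-activations, doubling the feature width so that no input direction can be permanently zeroed out. A minimal sketch:

```python
import numpy as np

def crelu(x):
    """Concatenated ReLU: [relu(x), relu(-x)] along the feature axis.
    Every input direction remains representable, which is one reason
    CReLU is used as a plasticity-preserving activation."""
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=-1)

x = np.array([[1.0, -2.0]])
print(crelu(x))  # [[1. 0. 0. 2.]]
```

Note the output dimensionality doubles, so downstream layers must be sized accordingly when swapping CReLU in for ReLU.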
Rebuttal 1: Rebuttal: Dear reviewer GE2W, We appreciate your constructive guidance. Based on your feedback, - We've refined our synthetic experiments. - We explored the synergies of input and label adaptation techniques in RL experiments. - Line 128 has been corrected for clarity. - We plan to provide a detailed explanation of SAM in Section 3. - We clarified the terminology focusing on “plasticity”. Please let us know if you have any further comments or feedback. We will do our best to address them. > **Question 2.1:** The experiments in Section 4 have the following major problems: - The hyperparameters are not properly tuned. - The experiments in Section 4 should include methods including {CReLU, LN, L2 (properly tuned) + Reset, CReLU + Reset, LN + Reset}. - Only five random seeds were used. - Needs to report confidence intervals. Thank you for your observations. We've undertaken a rigorous revision of our synthetic experimental setup: - Tuning Learning Rate & Weight Decay: Recognizing the criticality of tuning the learning rate and weight decay in non-stationary datasets, we've executed an exhaustive search across {0.1, 0.01, 0.001, 0.0001, 0.00001} for both. We decided to incorporate weight decay as a base setup since we found it important for reducing the variance of each individual run. - Broadening Baselines: We broadened our benchmarks to include {LN (B+H), LN (B), SAM (B+H), ReDo (B+H), CReLU (B+H), CReLU (H), Reset (H)}, with B and H denoting backbone and head network parts. For ReDo, we tuned the dormant threshold from {0.2, 0.1, 0.05, 0.02, 0.01}. In addition, we further investigated the synergistic interactions of these baselines. - Increasing Random Seeds: We increased the number of random seeds from 5 to 30. - Reporting Confidence Interval: For each method, we report the 95% confidence interval. Figure A.1 (attached pdf) reveals a clear bifurcation of algorithms excelling in either input adaptation or label adaptation. 
For input adaptation, LN(B) and SAM(B+H) demonstrated prominent efficacy. Conversely, for label adaptation, Reset (H) and CReLU (H) were effective. Exploring the synergies of combined approaches offered interesting insights. The best performance across both scenarios was achieved either by strategically blending methods that exhibited proficiency in both input and label adaptation or by incorporating all of the methodologies. We hope these comprehensive experiments address your concerns and illustrate the robustness of our message. > **Question 2.2:** Similar problems exist with the experiments in Section 5.2. - The baselines used in the synthetic experiments need to be added. - L2 + plasticity-preserving methods should be compared with SAM+Reset. We extended our experiments in the context of a sample-efficient RL setup, specifically with DrQ on the Atari-100k benchmark. From Table A.1 (attached pdf), we observed a pronounced synergy when mixing input adaptation with label adaptation techniques. On the other hand, when we concentrated exclusively on either input or label adaptation, we only observed marginal enhancements. For L2 regularization, we conducted careful tuning both individually and in combination with other methods, but its performance was found to be suboptimal. Therefore, we prioritized other methods for exploration. Regarding the specific combinations like Reset(B+H) and LN(B+H) suggested by the reviewer: given the time constraints, we didn't delve into these. However, we are willing to explore these avenues in our revised manuscript. To conclude, our extended experiments reaffirm our initial hypotheses, and we're excited to further refine our work based on the insights provided. > **Question 2.3:** The sentence on line 128 needs to be completed. We found that the sentence was inadvertently truncated. The complete sentence should be as follows: “... 
which encourages a model parameter $w$ to find a smoother region of the loss landscape.” We will ensure it is corrected in our revised manuscript. > **Question 2.4:** Spending more space to explain Section 3.2 will also be valuable for the community. Thank you for your suggestion. We understand that Sharpness-Aware Minimization (SAM) might be less familiar to the reinforcement learning community. We will expand on Section 3.2 in the revised manuscript to ensure clarity of the methodology. > **Question 2.5:** To me, "adaptability" and "plasticity" have very similar meanings. If they have a similar meaning, using just one word in the paper might be better. We concur with your observation that "plasticity" has been ambiguously defined in the reinforcement learning literature. However, upon reviewing definitions from the continual learning [1,2] and neuroscience [3,4] literature, it does appear that using "plasticity" in the same context as "adaptability" would be more natural. To clarify the definitions: - We redefine "Plasticity" as the model's ability to adapt. - "Input Plasticity" refers to the model's capability to adjust to input non-stationarity or changes in $p(x)$. - "Label Plasticity" refers to the model's capability to adjust to label non-stationarity or shifts in $p(y|x)$. - We plan to retitle our paper as "Enhancing Input and Label Plasticity for Sample Efficient Reinforcement Learning." We believe this change in terminology and definition will provide more clarity and prevent confusion throughout the paper. [1] A study on the plasticity of neural networks. Berariu et al., arXiv 2021. [2] Continual backprop: Stochastic gradient descent with persistent randomness. Dohare et al., arXiv 2022. [3] Neuroplasticity. Costandi, Moheb. MIT Press, 2016. [4] Neuroplasticity: New biochemical mechanisms. Reznikov et al., Springer Healthcare, 2020. > **Question 2.6:** What is the confidence interval reported in Figure 2? 
It is the 95% bootstrapped confidence interval. We will clarify this in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your reply and for performing the additional experiments. The new experiments help support the key claims of the paper. The new experiments and more runs for the experiments in Section 4 support the claim that SAM adds something that none of the existing solutions to the loss of plasticity provide. The new experiments also alleviate most of my concerns about the empirical rigour of the experiments. I suggest the authors also include an experiment in the final manuscript that shows what happens when individual components are removed from the "ALL" baseline in Figure A.1. In other words, it would be good to also have the performance of ALL - SAM(B+H), ALL - Reset(H), ALL - CReLU(H), ALL - LN(B) in Figure A.1. Of course, these results can go in an appendix. The new experiments in Section 5.2 are more mixed. The difference between the performances of SAM(B, H) + Reset(H) and LN(B) + Reset(H) is not statistically significant. Similarly, the difference between the performances of ALL (the last row of Table A.1) and ALL - SAM(B, H) is not statistically significant. So, it is unclear if SAM provides any benefit over existing plasticity-preserving methods. With that said, I agree with the authors that the results show that techniques that enhance input and output plasticity are complementary, and we can obtain the best results by combining methods that address both of these independently. As a side note, I think the authors should not bold methods in tables that are not statistically better than all other methods; I think it is a little misleading. There are probably better ways to show which methods perform the best, maybe using heatmaps. I thank the authors for checking the definition of plasticity in the continual learning and neuroscience literature. 
I like their plan to use terms like 'input plasticity' and 'output plasticity' and change the paper's title to reflect their main message more adequately. The empirical results support the new title, which was not the case for the previous title and main message. I want to point out that changing the terminology and main message (that input and output plasticity play complementary roles) requires significant rewriting. For example, parts of the abstract that currently discuss separate roles of generalization and plasticity have to be changed to discuss complementary roles of input and output plasticity. However, I feel confident that the authors will do a good job of adequately rewriting the paper to reflect the key message. Based on the new results that improve the empirical rigour of the experiments and the change of the main message (as reflected by the new title), I have updated my score to accept the paper. --- Reply to Comment 1.1.1: Comment: Thank you once again for your insightful feedback and rigorous examination of our manuscript. Through the extensive experiments based on your recommendations, our fundamental message on the intertwined roles of input and output plasticity has been crystallized. This clarity has deepened our understanding and underscored our primary message. Consequently, we are eager to revise our paper's narrative, accentuating the complementary nature of input and label plasticity. In parallel, these extensive experiments have provided a more nuanced perspective on SAM. While it certainly presents value, its impact may not be as profound as we initially stated. We recognize the nuance you've highlighted and commit to presenting SAM in a more toned-down manner. Your feedback was invaluable to our work, and we are dedicated to refining our manuscript. Warm regards, Authors of Submission 12455.
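For reference, the SAM objective clarified in this thread ("find a smoother region of the loss landscape") is typically implemented as a two-step update: perturb the weights along the normalized gradient by a radius rho, then descend using the gradient evaluated at the perturbed point. A minimal sketch, with `grad_fn`, `lr`, and `rho` as illustrative placeholders rather than the paper's settings:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update: ascend to the (approximate) worst-case point
    within an L2 ball of radius rho, then apply the gradient computed
    there. grad_fn(w) returns dL/dw; lr and rho are illustrative."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp

# On the simple quadratic L(w) = 0.5 * ||w||^2 (so dL/dw = w),
# the update still contracts the weights toward the minimum.
grad_fn = lambda w: w
w = np.array([2.0, 0.0])
w_new = sam_step(w, grad_fn)  # [2 - 0.1 * 2.05, 0] = [1.795, 0]
```

In practice this costs two forward-backward passes per update, which is one trade-off when combining SAM with high replay ratios.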
Summary: Sample efficiency in RL is desirable to reduce computational and data collection costs, and is particularly critical to data-limited domains. While off-policy methods can improve sample efficiency by training multiple passes over the same data, they face challenges due to overfitting, which makes it harder for the model to adapt to new data. This paper argues that to address the problem, it is important to tackle both the generalization and plasticity of the model, and proposes a method that achieves this. The method uses Sharpness-Aware Minimization (SAM) to improve the model’s generalization, and a reset mechanism that periodically reinitializes the final layers of the model to inject plasticity. Using a synthetic supervised learning experiment, the authors show that SAM helps the model better adapt to new inputs, and the reset mechanism helps it to adapt better to new labels. These improvements in adaptability enable off-policy methods to better utilize multiple updates on data, improving sample efficiency. The authors evaluate their method on Atari 100k and DMC-M, applying it to the DER and DrQ learning algorithms that are designed for sample-efficient learning. The results show that both DER and DrQ gain significant performance improvements when equipped with SAM + reset, compared to other methods that aim to improve generalization or plasticity alone. The authors further perform ablation studies. Notably, they find that SAM + CReLU performs almost as well as SAM + reset, which suggests that it is the combination of improving generalization and adaptability that leads to the performance gains, rather than specific synergies between SAM and reset. This highlights the value of their contribution, which lies not in merely combining two known techniques, but in discovering a synergistic dynamic between generalization and plasticity for sample-efficient RL. Strengths: - The writing is clear, and figures are intuitive and well-presented. 
The authors' claims are appropriately qualified and supported by the experiments. The logic of the paper flows very well; many questions that I had while reading were quickly addressed in subsequent sections. - The paper studies an important topic that has broad-ranging technical and environmental impacts. Sample-efficient learning is not only an interesting technical problem, but also directly contributes to decreasing the environmental footprint of our field. In the age of increasingly large models trained with increasingly large amounts of data and compute, this is a critical issue. - The solution that the authors propose is simple yet elegant, and can be widely applied as it requires little change to model architecture. Furthermore, beyond showing strong empirical results for their approach, the authors also provide a hypothesis on how the underlying principles of generalization and plasticity achieve a synergistic dynamic, and support it with synthetic experiments. Weaknesses: - Both SAM and the reset mechanism are known techniques, and applying them in conjunction to achieve good results is not by itself sufficient for novelty. However, in the ablation studies, the authors also demonstrate that SAM + CReLU performs almost as well as SAM + reset. This strengthens their claim that it is the synergistic relationship between generalization and plasticity, not the specific method used to improve these properties, that leads to the performance gains, which is novel. Yet, testing just one alternative combination does not seem sufficient to support the claim. - The experiments that study the impact of generalization and plasticity on the model's learning dynamics are synthetic and based on supervised learning, which may not accurately approximate the dynamics faced during RL. For example, uniform-randomly changing labels may not accurately simulate the moving-targets problem faced in deep Q-learning. 
- The base learning algorithms that the authors evaluate with, DER and DrQ, are not state-of-the-art for data-limited RL. For example, EfficientZero reports achieving a normalized median of 1.090 and normalized mean of 1.943 on Atari 100k, which is better than the results shown in this paper. - Several fields appear to be missing in Table 1? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Have you tested other combinations of methods to improve generalization and plasticity? For me, the main contribution is not SAM + reset, but the insight that generalizability and plasticity play separate roles in improving adaptability, and combining methods that improve each of them can lead to further performance gains; SAM + reset is just one instantiation used to verify your hypothesis. Ideally, I'd like to see that the same pattern holds on other combinations beyond SAM + reset and SAM + CRelu. - Why did you select DER and DrQ as your base learning algorithms? Have you experimented with applying your method to stronger-performing algorithms, and would the same performance gains you achieved on DER and DrQ also hold there? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors are forthcoming with the main limitations of this work, and explained them clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
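The reset mechanism summarized in this review (periodically reinitializing only the final layers while keeping the backbone) can be sketched in a few lines. The following is a minimal numpy mock-up of the general idea only; the layer names, sizes, and initialization are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def init_params(rng, d_in=8, d_hidden=32, d_out=4):
    # "backbone" feature layer plus a final "head" layer (illustrative names)
    return {
        "backbone": rng.normal(size=(d_in, d_hidden)),
        "head": rng.normal(size=(d_hidden, d_out)),
    }

def reset_head(params, rng):
    # Periodic reset: reinitialize only the final layer to inject plasticity,
    # leaving the backbone's learned features untouched.
    new_params = dict(params)
    new_params["head"] = rng.normal(size=params["head"].shape)
    return new_params
```

In the paper's setting, such a reset would be applied every fixed number of gradient updates, alongside SAM as the optimizer on the generalization side.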
Rebuttal 1: Rebuttal: Dear reviewer 1mez, We appreciate your insightful questions and positive support. We have provided a detailed response to the comments, which includes additional experiments on finding different synergistic combinations and on integrating SAM + Reset into advanced algorithms. Please let us know if you have any further comments or feedback. We will do our best to address them. > **Question 1.1:** Have you tested other combinations of methods to improve generalization and plasticity? For me, the main contribution is the insight that generalizability and plasticity play separate roles in enhancing adaptability. Thank you for highlighting our paper's key insight. To address the reviewer's concern, we did investigate various method combinations. As seen in Figure A.1 and Table A.1 (attached pdf), LN(B) and SAM(B+H) consistently excel in input adaptation, while CReLU(H) and Reset(H) stand out for label adaptation. When combining these methods, we made some insightful observations in both synthetic and reinforcement learning experiments: - CReLU + Reset showed moderate improvements. - SAM + LN also yielded moderate improvements. - Yet, pairing any input adaptation method (either SAM or LN) with any label adaptation method (CReLU or Reset) consistently produced robust results. - A comprehensive combination of all these methods further enhanced performance. These findings reinforce our notion of the distinct yet complementary roles of generalizability and plasticity in enhancing adaptability. > **Question 1.2:** For synthetic experiments, uniform-randomly changing labels may not accurately simulate the moving-targets problem faced in deep Q-learning. Due to the inherent complexities of learning dynamics in RL, we simplified our experiments using a supervised learning setup. We acknowledge your concerns regarding the alignment of our "Label Adaptation" scenario with RL's dynamics, especially in deep Q-learning. 
In Q-learning, the agent undergoes two primary label adaptation scenarios: 1) the continual change of the best action as new data is received (akin to changing labels), and 2) the alteration of target Q-values (resembling noisy labels). We recognize the importance of both scenarios but decided to focus on the former scenario to keep the synthetic experiment manageable. We concur that our current design might not entirely capture the dynamics of RL. Exploring a more realistic synthetic experiment is indeed an exciting direction for future research. We will point out such limitations in our manuscript. > **Question 1.3:** Have you experimented with applying your method to stronger-performing algorithms? We have explored the applicability of our method beyond just DER and DrQ. Specifically, our experiments encompass BBF, a state-of-the-art algorithm that learns from scratch on the Atari-100k benchmark, and SimTPR, which leverages pre-trained representations for Atari. In both instances, combining SAM and Reset consistently outperformed using either method in isolation. Detailed empirical results can be found in Table A.2 (attached pdf). We appreciate your insightful suggestion. > **Question 1.4:** Several fields appear to be missing in Table 1. Thank you for pointing out the placeholders. The updated values for DER + CReLU and DER + LayerNorm, previously marked as “0.xxx,” were provided in Supplementary Section 6, highlighted in red. We apologize for any inconvenience this caused. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. Given the robustness of the results, its novelty and broad impact, I have increased my score to an 8. I believe this work deserves to reach a wide audience. --- Reply to Comment 1.1.1: Comment: We deeply appreciate your positive evaluation and recognition of our work's potential impact. 
Your insights, conveyed through the review, serve as both guidance and motivation for our subsequent endeavors in this research direction. Thank you.
Rebuttal 1: Rebuttal: We sincerely appreciate all four reviewers for their constructive and insightful comments. The reviewers recognized the strengths of our paper as follows: - A fresh insight into the dissection of generalization and plasticity (Reviewers 1mez, GE2W). - The introduction of a synergistic solution: SAM + Reset, which seamlessly integrates without any architectural modifications (Reviewers dlhS, mzhA). We recognize that reviewers have highlighted several key points for us to address: - Ensuring the robustness of our synthetic experiments (Reviewer GE2W). - Delving deeper into explored synergies (Reviewers 1mez, GE2W). - Investigating the approach's applicability to advanced algorithms (Reviewers 1mez, mzhA). - Extension to various domains (Reviewer dlhS). Below, we address each of these points in the following sections. Please refer to the attached PDF for detailed figures and tables. > **Refinements to the synthetic experiments** In response to feedback, we have: - Adjusted learning rates and weight decay over a wide range of values. - Broadened our experimental baselines, including ReDO [1]. - Increased the number of random seeds from 5 to 30. For clarity: "B" signifies the network's backbone and "H" indicates its head. Our findings spotlight the efficacy of LN(B) and SAM(B+H) for input adaptation, and the efficacy of CReLU(H) and Reset(H) for label adaptation. Moreover, the interplay between these methods has been thoroughly investigated. As illustrated in Figure A.1, we found that: - Combining SAM(B+H) with LN(B), and pairing CReLU(H) with Reset(H), resulted in incremental improvements. - Merging input-focused methods like SAM(B+H) or LN(B) with label-focused counterparts, such as CReLU(H) or Reset(H), consistently delivered impressive outcomes. - Integrating all the methods together yielded the most significant enhancements. These results vividly underscore the synergy between generalization and plasticity, reinforcing our research's relevance. 
[1] The Dormant Neuron Phenomenon in Deep Reinforcement Learning. Sokar et al., ICML 2023. > **Deeper Exploration of Synergies** Responding to the feedback received, we've intensified our investigation into the potential combinations of techniques within both synthetic and RL settings, as described in Figure A.1 and Table A.1. Our in-depth analysis revealed that while methods tailored for input and label adaptability operate distinctly, they produce a remarkable synergy when fused. We believe this comprehensive exploration solidifies the importance of our contributions. > **Extension to Advanced Algorithms** Moving beyond the realms of DER and DrQ, we have ventured into more advanced algorithms: - BBF [2]: Recognized as a state-of-the-art algorithm for Atari-100k. - SimTPR [3]: An algorithm that leverages pretrained representations for Atari. Table A.2 summarizes our findings: - BBF: Utilizing SSL + Reset or SAM + Reset distinctly outperforms isolated implementations. - SimTPR: The combination of SAM and Reset magnifies results compared to their standalone use. Importantly, a replay ratio of 4 eclipses the results at a ratio of 2, highlighting the approach's scalability. Collectively, these experiments reiterate our method's generality and its alignment with modern algorithms. [2] Bigger, Better, Faster: Human-level Atari with human-level efficiency. Schwarzer et al., ICML 2023. [3] On the Importance of Feature Decorrelation for Unsupervised Representation Learning for RL. ICML 2023. > **Extension to DMC Generalization Benchmark** Diverging from the conventional DMC-GB-500k benchmark [4], we opted for a consistently noisy environment for both training and testing. This choice was made to evaluate generalization techniques under persistent input non-stationarity. Here, we used 5 different environments: {walker-walk, cartpole-swingup, finger-spin, walker-stand, ball-in-cup-catch}. 
Table A.3 encapsulates our findings: SAM, when paired with reset, exhibits performance closely mirroring that of Adam. However, without the influence of a reset, SAM surpasses Adam. Contrary to our initial presumptions, we found that the DMC environment manifested a pronounced degree of input non-stationarity in contrast to Atari, even when excluding DMC-GB's intrinsic noise. This observation leads to the insight that "DMC inherently calls for methods addressing input non-stationarity." Such a realization prompted a deeper introspection into the role of the reset strategy. As detailed in the original reset paper [5], DMC employs a notably aggressive reset strategy, engaging up to a 90% reset for the DrQ algorithm and a comprehensive reset for SAC. Meanwhile, Atari adopts a milder reset approach, reinitializing around half of its network. DMC's rigorous reset approach serves as an effective antidote to its inherent input non-stationarity. This might explain why the benefits of SAM appear subdued when combined with such an assertive reset. Intriguingly, our evaluations on synthetic and Atari platforms revealed that Reset(B, H) strategies faltered in performance. This highlights the necessity for a more tailored approach, especially as network sizes expand. To summarize, we believe effectively managing input non-stationarity in RL is key to improving performance on both DMC and DMC-GB. [4] Generalization in Reinforcement Learning by Soft Data Augmentation. Hansen et al., ICRA 2021. [5] The Primacy Bias in Deep Reinforcement Learning. Nikishin et al., ICML 2022. > **Conclusion** In conclusion, we deeply appreciate the constructive feedback from the reviewers. Their insights have refined our research, and we believe our revisions now thoroughly address the raised concerns. We eagerly await further feedback. Warm regards, Authors of Submission 12455. Pdf: /pdf/beaa4758ff037ff8fca2eb71440736568ba01c68.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Privacy-Preserving CNN Training with Transfer Learning
Reject
Summary: This paper combines several existing techniques to achieve privacy-preserving CNN training. These techniques include transfer learning, Quadratic Gradient, mathematical transformation, and the matrix-encoding method Volley Revolver. The writing is more of a technical document than a research paper with insights. Strengths: 1) For the first time, they apply homomorphic encryption to neural network training. 2) They demonstrate the feasibility of homomorphic CNN training. 3) They propose the privacy-preserving-friendly Squared Likelihood Error (SLE) for CNN training. 4) Experimentally, their algorithm has state-of-the-art performance in convergence speed. Weaknesses: 1) The introduction of related works is pretty simple, which makes it difficult to evaluate the contributions of the paper. 2) The quality of writing/presentation is very weak and unreadable. Technical Quality: 4 excellent Clarity: 1 poor Questions for Authors: 1) Is the proposed method suitable for other neural networks, like RNNs or Transformers? 2) Do you re-evaluate your method by replacing the pretrained REGNET_X_400MF with another one? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 1 poor Contribution: 3 good Limitations: See Questions and weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1: Is the proposed method suitable for other neural networks, like RNNs or Transformers? A1: The proposed method aims at the MLR model, rather than other neural networks. We wonder if RNNs and Transformers can be reduced to MLR with transfer learning. We would like to point out here that the new loss function (so-called SLE) cannot be used in neural networks with hidden layers, in which case their loss functions might not be concave. **C2: Do you re-evaluate your method by replacing the pretrained REGNET_X_400MF with another one? A2: We think we evaluated our proposed method with the pre-trained REGNET_Y_400MF model and perhaps other pre-trained REGNET models. We just chose a pre-trained model from the REGNET family that has a last FC layer with a smaller number of nodes. We definitely will evaluate the proposed method on more pre-trained models beyond the REGNET series. **C3: The introduction of related works is pretty simple, which makes it difficult to evaluate the contributions of the paper. A3: Yes, we did only a limited survey in the related work part. This writing does seem like a technical document. It is too bold for us to claim this work to be the first without a full survey of the current techniques. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed explanation of my concerns. I keep my initial score. --- Reply to Comment 1.1.1: Title: Response to reviewer og3N Comment: You're welcome and thank you for your thoughtful consideration.
Summary: The paper presents a method for CNN transfer learning implemented in homomorphic encryption to protect privacy. Strengths: I'm not aware of the method being implemented in HE before. Weaknesses: I cannot judge the machine learning aspects, but I don't see a strong novelty on the cryptographic side. The paper claims that some prior work is overly complex without going into details. I find it concerning that the work relies relatively heavily on non-peer reviewed references by a single author (5 out of 19). Line 150 says "well-studied by several works" without giving any reference. Minor issues: - l6: :: - l15: .; - l50: pervacy-persevering (privacy-preserving?) - l51: diffuclt - l71: seveal - l154: After many attempts (unscholarly language) Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: Line 108 mentions setting all the weight of an FC layer to zero. Wouldn't that erase all signals? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1: The paper claims that some prior work is overly complex without going into details. Line 150 says "well-studied by several works" without giving any reference. A1: Some prior work refers to the works on encoding methods for data (dataset matrices). We should have cited the works we are familiar with. **C2: I find it concerning that the work relies relatively heavily on non-peer reviewed references by a single author (5 out of 19). A2: The main techniques in this work did come from non-peer-reviewed references, such as quadratic gradient and Volley Revolver. **C3: Line 108 mentions setting all the weight of an FC layer to zero. Wouldn't that erase all signals? A3: We do not know whether setting all the weights of an FC layer to zero would erase all signals, but Python simulation experiments suggest that, at least for the MLR model, it works. We do know that we should not set all the weights to zero for neural networks with many layers. The MLR model is perhaps a very simple neural network with only 2 layers; we guess it is the lack of any hidden layer that allows the MLR to work in such a situation. --- Rebuttal Comment 1.1: Comment: A1: Maybe you could name the missing references now? --- Reply to Comment 1.1.1: Title: Response to reviewer ZXqT Comment: Kim et al. [1] introduced the technique of packing a database matrix into a single ciphertext, and similar approaches have been adopted by other researchers [4]. Jiang et al. [6] discussed the packing method for matrix multiplication. Moreover, numerous studies [3,5] have explored encoding methods for performing CNN inference in the encrypted domain. 
For approximating activation functions using polynomials, Kim et al. [2] employed the least-squares method and provided detailed calculations. In fact, both Python and MATLAB offer a function named "polyfit," which facilitates polynomial approximation in the least-squares sense. There are surely other valuable recent contributions in these areas that we have not covered, particularly developments within the past two years. We plan to conduct a comprehensive survey of these topics in our next submission. $\textbf{References}$ [1] Kim, Andrey, et al. "Logistic regression model training based on the approximate homomorphic encryption." BMC Medical Genomics 11.4 (2018): 23-31. [2] Kim, Miran, et al. "Secure logistic regression based on homomorphic encryption: Design and evaluation." JMIR Medical Informatics 6.2 (2018): e8805. [3] Gilad-Bachrach, Ran, et al. "CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy." International Conference on Machine Learning. PMLR, 2016. [4] Han, Kyoohyung, et al. "Efficient logistic regression on large encrypted data." Cryptology ePrint Archive (2018). [5] Brutzkus, Alon, Ran Gilad-Bachrach, and Oren Elisha. "Low latency privacy preserving inference." International Conference on Machine Learning. PMLR, 2019. [6] Jiang, Xiaoqian, et al. "Secure outsourced matrix computation and application to neural networks." Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 2018.
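The least-squares polynomial approximation discussed in this reply can be reproduced with the `polyfit` routine it mentions. The sketch below is ours: the degree (7) and grid density are illustrative choices, not the exact parameters of Kim et al. [2] or of the paper under review.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Least-squares fit of a degree-7 polynomial to the sigmoid on [-8, 8],
# the interval discussed in the rebuttal above.
xs = np.linspace(-8.0, 8.0, 1001)
coeffs = np.polyfit(xs, sigmoid(xs), deg=7)
poly = np.poly1d(coeffs)
max_err = float(np.max(np.abs(poly(xs) - sigmoid(xs))))
```

On [-8, 8], such a degree-7 least-squares fit is typically accurate to within a few hundredths, which is in line with the "acceptable performance" claimed in the rebuttal.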
Summary: In this paper, the authors propose a CNN training technique in the homomorphic encryption domain based on transfer learning. A gradient variant called Quadratic Gradient for homomorphic encryption was proposed, along with a sigmoid-based Softmax approximation. In addition, a new loss function for squared likelihood error was proposed, and a matrix-encoding method called Volley Revolver was also proposed. Finally, they released the code they implemented. Strengths: Properly implementing functions for training in the homomorphic encryption domain is challenging. The implementation itself, and the disclosure of their source code, are worth crediting. Weaknesses: This paper performed a simulation on the MNIST dataset for performance evaluation. This seems too simple a dataset, even considering the homomorphic encryption environment. Although they claim this to be the first implementation of transfer-learning-based CNN training in the homomorphic encryption domain, a similar study was recently published first. Of course, this paper takes a different approach. [*] https://openreview.net/forum?id=jJXuL3hQvt This paper is considered incomplete in several respects. The main reason is that the proposed scheme's threat model needs to be clarified. First, "the proposed architecture" is not clear. There needs to be a description of the proposed architecture. They should explain the exact part where homomorphic encryption was actually carried out in the transfer learning process and what benefits can be gained from doing so. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: What is the security target to achieve in this paper? It is necessary to explain the threat model that the client or server faces and what benefits the client or server can get from the proposed homomorphic scheme. 
For example, if the client data is encrypted, the client's privacy can be protected when the server processes the data, but from the server's point of view, it is necessary to clearly explain which portion of transfer learning should be carried out on the encrypted domain. This paper lacks a discussion on this, and it is difficult to know where data processing takes place from the paper alone. In particular, this paper said that Bootstrapping was not used. It is a very bold and interesting claim. However, they do not provide information on the entire architecture of the proposed scheme. The requirement of bootstrapping is closely related to the information on the homomorphic encryption parameters, such as the ciphertext size, the number of available slots, the number of multiplicative levels, and so on. In fact, the claim that bootstrapping is not used means that homomorphic encryption was applied only in a very small part of the entire operation in their (unknown) architecture, which raises the question of whether the method proposed in this paper is applicable in a practical application. However, judging this part is impossible because it is not explained in detail in the paper. Finally, [-8,8] is used as the average pooling range. However, it seems too small according to recent work [*]. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Not exactly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1: What is the security target to achieve in this paper? A1: HE provides a stringent level of security. To use transfer learning, the client uses a pre-trained CNN model to process their data in advance in order to obtain a new dataset, and then encrypts this new dataset. This is the part where transfer learning is used. The server only needs to handle the computation of the MLR training over the encrypted data without knowing what exactly it is. **C2: In particular, this paper said that Bootstrapping was not used. It is a very bold and interesting claim. A2: We would have liked to use bootstrapping but did not, due to current constraints on time and funding and some remaining optimization problems. Simply using the bootstrapping function in HEAAN would consume more time. The homomorphic encryption parameters, such as the number of available slots, have to be further considered for time optimization. We apologize for not completing the full experiment. **C3: However, they do not provide information on the entire architecture of the proposed scheme. A3: The entire architecture of the proposed scheme is actually just the MLR, namely a 2-layer neural network without any hidden layers. For example, in this work, the input layer has 400 nodes plus one constant-1 node, and the output layer has 10 nodes. **C4: Finally, [-8,8] is used as the average pooling range. However, it seems too small according to recent work [*]. A4: The range [-8, 8] is used to generate the polynomial approximation of the Sigmoid function in the output layer. The polynomial approximation developed by the method used in this work has acceptable performance even on a range larger than [-8, 8]. 
We appreciate the recommendation of the recent work [*].
Summary: The paper employs a few heuristic methods to accelerate logistic regression training over encrypted data. The heuristics considered include: a new loss function called squared likelihood error (SLE) along with a polynomial approximation of sigmoid function, a faster gradient-descent method based on quadratic gradient, and a matrix encoding method called volley revolver. Strengths: The only strength of the paper is that it attempts to solve a really challenging problem of learning over encrypted data and it provides the code apriori through an anonymous GitHub link. Weaknesses: 1) First and foremost, the title of the paper is misleading. The paper never deals with CNN training even in the limited context of transfer learning. It is true that most practical ML applications start with a pre-trained model and finetunes the parameters of this model. However, transfer learning implies that the whole model is finetuned apart from learning the application-specific last fully connected (FC) layer. What this paper attempts to do is just learn the last FC layer, which is nothing but multiclass logistic regression (MLR) training. Therefore, the title of the paper should not claim anything about CNN training. 2) Numerous attempts have been made over the last five years attempting to achieve MLR training on encrypted data, which have not been acknowledged in this paper and compared against. For example, see the works starting from: [A] Crawford et al., "Doing Real Work with FHE: The Case of Logistic Regression", 2018 [B] Han et al., "Logistic regression on homomorphic encrypted data at scale", AAAI 2019 [C] Bergamaschi et al., "Homomorphic Training of 30,000 Logistic Regression Models", 2019 3) This current paper appears to be very similar to the rejected NeurIPS 2022 submission entitled "Privacy-Preserving Logistic Regression Training with A Faster Gradient Variant". 
While the NeurIPS 2022 submission focused on only the quadratic gradient component, the current paper also introduces the SLE loss. However, it is not clear how this SLE loss function is better. Moreover, what is the expression for the gradient of $\ln L_2$ and where is it used in Algorithm 1? 4) The so-called volley revolver does not constitute any novel "matrix-encoding" method. Such packing tricks are regularly used in the context of efficient SIMD operations in FHE. 5) Overall, none of the three claimed contributions (namely, quadratic gradient, SLE loss, and volley revolver) appear to be original or significant enough to make an overall impact. 6) Finally, though the paper claims that the goal is to make logistic regression training practical, not a single experimental result has been shown to prove this point. Running 2 iterations with 128 MNIST images takes approximately 21 minutes and the last line claims that real experiments would take "weeks, if not months". There are other reported works in the literature, which showed more realistic results. [D] Nandakumar et al., "Towards Deep Neural Network Training on Encrypted Data", CVPR-W 2019 [E] Lou et al., "Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data", NeurIPS 2020 Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: Please see weaknesses of the paper. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: All the limitations have not been presented and addressed. There appears to be no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1: First and foremost, the title of the paper is misleading. Therefore, the title of the paper should not claim anything about CNN training. A1: Yes, we are worried that the title of this manuscript would be seen as deliberately eye-catching, although we think our work is practical for privacy-preserving CNN training based on mere HE. We may change its title or justify it in a future submission. **C2: Numerous attempts have been made over the last five years attempting to achieve MLR training on encrypted data, which have not been acknowledged in this paper and compared against. A2: Several of our own previously developed techniques are used in this work, and hence we are not worried about plagiarizing others' work. We admit that only a limited survey has been conducted. **C3: However, it is not clear how this SLE loss function is better. Moreover, what is the expression for the gradient of $\ln L_2$ and where is it used in Algorithm 1? A3: The new loss function with the Sigmoid function is very HE-friendly and eliminates the need for the conventional Softmax function. We should not have coined a term for this function, since it cannot be used in normal neural networks with hidden layers. The gradient of $\ln L_2$ in this work is a column vector of size $c(1+d)$. In the practical Python implementation, we reshape the one-column gradient vector into a matrix of size $c \times (1+d)$ and then use the Numpy package to facilitate the computation. In Algorithm 1, line 23, $(Y-Z)^{\intercal} \times X$ is the matrix that stores the gradient. **C4: The so-called volley revolver does not constitute any novel "matrix-encoding" method. 
Such packing tricks are regularly used in the context of efficient SIMD operations in FHE. A4: The basic idea of volley revolver is to pack the transpose of one of the two matrices involved in a multiplication. This forms a symmetrical structure between the two packed matrices, which is helpful for both the forward inference and the backward learning. Current solutions for privacy concerns based on HE usually pack a matrix into a single ciphertext. We are not sure that volley revolver is the first method to encode the transpose of a matrix, but we did realise it could be used to train neural networks in the encrypted domain. **C5: There are other reported works in the literature, which showed more realistic results. A5: We appreciate the recommendations. Future versions of this work might introduce the bootstrapping operation to train the MLR model for hundreds of iterations and compare against these works. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thanks for the rebuttal. After reading through the other reviews and all the responses of the authors, I do not find the responses convincing enough to change my initial rating. --- Reply to Comment 1.1.1: Title: Response to reviewer F8vz Comment: Thank you for your feedback and for your thoughtful consideration.
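To make the reshaping in A3 concrete, here is a minimal NumPy sketch of the gradient matrix $(Y-Z)^{\intercal} \times X$ for $c$ classes and $1+d$ features. The shapes and the element-wise Sigmoid standing in for Softmax follow the rebuttal's description, but the exact loss normalization and all variable names are our assumptions, not the paper's definitions:

```python
import numpy as np

# Hypothetical shapes: n samples, d features (plus a bias column), c classes.
n, d, c = 8, 4, 3
rng = np.random.default_rng(0)

X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])  # n x (1+d), bias first
W = rng.normal(size=(c, 1 + d))                            # weights, c x (1+d)
Y = np.eye(c)[rng.integers(0, c, size=n)]                  # one-hot labels, n x c

# Element-wise Sigmoid in place of Softmax (the "HE-friendly" choice
# described in A3); the precise loss this corresponds to is our assumption.
Z = 1.0 / (1.0 + np.exp(-X @ W.T))                         # n x c predictions

# The gradient stored directly as a c x (1+d) matrix, as in line 23 of
# Algorithm 1: (Y - Z)^T X.
G = (Y - Z).T @ X
assert G.shape == (c, 1 + d)
```

Flattening `G` row by row recovers the gradient column vector of size $c(1+d)$ mentioned in A3.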
NeurIPS_2023_submissions_huggingface
2023
Tight Bounds for Volumetric Spanners and Applications
Accept (poster)
Summary: This paper studies the $\ell_2$-volumetric spanner (or $\ell_2$-well-conditioned spanning subset) for a given dataset $X$ based on the local search algorithm, in which each iteration implements a single swap to update the volumetric spanner $S$. More generally, the results are extended to the $\ell_p$-norm for any $p \ge 1$. Furthermore, the proposed algorithm can be applied to build coresets of size $O(d/\epsilon)$, which is independent of the number of data points $n$, for the minimum volume enclosing ellipsoid (MVEE) problem. Strengths: (1) The proposed algorithm is very easy to understand and the paper is well-written. (2) This paper presents rigorous proofs for the proposed algorithm. (3) The coresets for the minimum volume enclosing ellipsoid problem are of size $O(d / \epsilon)$, which is independent of the number of points $n$. Weaknesses: 1. For the $\ell_2$-volumetric spanner case (1) In the paper [Hazan, Karnin, and Meka'13] (see Theorem 1.1), the $1$-approximate spanner $S$ has size $|S| = 12 d$ and the running time is $O(n^{3.5} + n^3 d + d^5)$. (2) In the paper [Woodruff and Yasuda'23] (see Theorem 3.4 and Corollary 3.5), the $(1+\epsilon)$-approximate spanner $S$ has size $|S| = O(d \log\log{d} + d / \epsilon)$ and the running time is $\widetilde{O}((\text{nnz}(A) + d^2) d / \epsilon)$, where $\text{nnz}(A) = O(nd)$. (3) In this paper, setting $\delta = O(\epsilon)$ gives the $(1+\epsilon)$-approximate spanner $S$ of size $|S| = O(d)$ and the running time $O(n d^4 \log{d} / \epsilon)$. From the above comparison, we can see that the running time is much worse than that of [Woodruff and Yasuda'23], taking $\epsilon \in (0, 1)$ as a constant. 2. This paper gives coresets for MVEE with size $|S| = O(d / \epsilon)$ and running time $O(n d^4 \log (d/\epsilon) / \epsilon^2)$. The size of the coreset is impressive, but the running time and approximation ratio $(1+\epsilon)^d$ are not satisfactory. 
In the paper [Cohen, Cousins, Lee, and Yang'19] (see Theorem 1.1), the $\sqrt{d}$-approximate ellipsoid can be obtained in time $O(n d^2 \log (n/d) / \epsilon)$. The coreset is a proxy of dataset $X$, but computing the coreset of $X$ in this paper takes much more time than computing the approximate ellipsoid of $X$ directly in the paper of Cohen et al. For me, this is a critical drawback. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Line 37-39, this paper claims that it improves both lines of prior work [Hazan, Karnin, and Meka'13] and [Woodruff and Yasuda'23]. What are the aspects of improvements? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > For the $\ell_2$-volumetric spanner case > (1) In the paper [Hazan, Karnin, and Meka'13] (see Theorem 1.1), the 1-approximate spanner $S$ has size $|S| = 12d$ and the running time is $O(n^{3.5} + n^3d +d^5)$. > > (2) In the paper [Woodruff and Yasuda'23] (see Theorem 3.4 and Corollary 3.5), the ($1+\epsilon$)-approximate spanner $S$ has size $|S| = O(d \log \log d + d/\epsilon)$ and the running time is $\tilde{O}( (\mathrm{nnz}(A) + d^2) d/\epsilon)$, where $\mathrm{nnz}(A) = O(nd)$. > > (3) In this paper, setting $\delta = O(\epsilon)$ gives the ($1+\epsilon$)-approximate spanner $S$ of size $|S| = O(d)$ and the running time $O(nd^4 \log d / \epsilon)$. From the above comparison, we can see that the running time is much worse than that of [Woodruff and Yasuda'23] taking $\epsilon \in (0,1)$ as a constant. For point (1), note that our algorithm (see Theorem 3.6) returns a 1-approximate spanner $S$ with size $3d$ in $O(d \log d)$ iterations (which means $O(nd^4 \log d)$ time even if we do a naive implementation – more on this below). So, both spanner size and runtime in our algorithm are better. Moreover, we don't need to appeal to the machinery on spectral sparsification, barrier potentials, etc. to obtain our result. For points (2) and (3), we first note that the greedy algorithm of [Woodruff and Yasuda 23] (henceforth [WY23]) needs to do essentially the same computation as us: they have to compute which vector to add in order to maximize the determinant (i.e., to compute a leverage score). Indeed, [WY23] refers to an earlier work of [Todd] for this step. The $\mathrm{nnz}(A)$ bound is obtained by using sketching methods to speed up the computation of leverage scores. We can use exactly the same methods for quickly computing leverage scores in our local search step. 
[Todd] needs to perform $d$ steps of greedy addition (each step involving computing appropriate leverage scores, which takes $\mathrm{nnz}(A)$ time), while we need to perform $d \log d$ rounds of local search. Each round involves (a) removing each of the elements in $S$ one at a time and (b) computing which new vector to add to $S$, similar to greedy addition, so the running time of each local search round is $|S| \times \mathrm{nnz}(A) = O(d \times \mathrm{nnz}(A))$. The number of rounds is larger by a factor $\log d$ as we noted above, so the overall runtime is worse than [WY23] by a factor $d\log d$. This may be too high a price to pay for improving a $\log \log d$ factor in some applications, but we have the interesting conceptual message that local search gives a strict improvement over the best known guarantee for greedy. Also, our analysis is simple and direct: it does not require appealing to the result of Todd or to the connection to MVEE. (Indeed, in Section 4, we use the connection to MVEE in the opposite direction, obtaining a better result for MVEE.) > This paper gives the coresets for MVEE with size $|S| = O(d/\epsilon)$ and running time $O(nd^4 \log(d/\epsilon) / \epsilon^2)$. The size of the coreset is impressive, but the running time and approximation ratio $(1+\epsilon)^d$ are not satisfactory. In the paper [Cohen, Cousins, Lee, and Yang'19] (see Theorem 1.1), the $\sqrt{d}$-approximate ellipsoid can be obtained in time $O(nd^2 \log(n/d) / \epsilon)$. The coreset is a proxy of the dataset $X$, but computing the coreset of $X$ in this paper takes much more time than computing the approximate ellipsoid of $X$ directly in the paper of Cohen et al. For me, this is a critical drawback. The work on fast algorithms for John’s ellipsoids *measures approximation differently*. 
A $\sqrt{d}$-approximation in that context translates to a $(\sqrt{d})^d$-approximation to the volume (because if we scale a body by $c$, the volume in $d$-dimensional space grows by a factor of $c^d$). So in this sense, our construction is much stronger in terms of approximation ratio. [This is a natural question that readers may have, so we will add the above to the final version.] Second, as we mentioned above, using sketching methods for computing leverage scores, our running time can be brought down to $O((\mathrm{nnz}(A) + d^2) \cdot d^2 \log d / \epsilon)$, which is about a $d \log d$ factor worse. > Question. In Line 37-39, this paper claims that it improves both lines of prior work [Hazan, Karnin, and Meka'13] and [Woodruff and Yasuda'23]. What are the aspects of improvements? We described the improvements over prior works in our responses above. We also listed them more clearly in the common rebuttal above (see “Message to all Reviewers”). We will add this discussion to the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your clear responses! For the comparisons with related work, for example [Hazan, Karnin, and Meka'13] and [Woodruff and Yasuda'23], it would be clearer if you could use a table to list the size of the volumetric spanner, the number of iterations, the running time, etc. In addition, I have the following concerns. (1) For the $\ell_2$-volumetric spanner case, compared with the running time of [WY'23], the running time in your paper is worse by a factor of $d \log{d}$ even though you use sketching techniques to compute leverage scores quickly. Although the proposed method is simple and does not appeal to Todd's result, these points cannot convince me. (2) The core part of the proposed method is the Local Search, where the set $S$ is initialized by [Civril and Magdon-Ismail'09] and the subsequent operations are just swaps between $S$ and $\overline{S}$. What's the running time of obtaining the initial $S$? 
Overall, I think the contribution of this paper is kind of limited. --- Reply to Comment 1.1.1: Comment: >For the comparisons with some related work, for example, [Hazan, Karnin, and Meka'13] and [Woodruff and Yasuda'23], it would be more clear if you could use a table to list the size of volumetric spanner, the number of iterations, and running time, etc. Sure, thanks for your suggestion. We will add a more detailed description of our results and how they improve on previous work, and will add a table of results too. >(1) For $\ell_2$-volumetric spanner case, comparing with the running time of [WY'23], the running time in your paper is worse by a factor of $d \log{d}$ even though you use sketching techniques to compute leverage scores quickly. Although the proposed method is simple and does not appeal to Todd's result, these cannot convince me. Note that besides being simple, our local search algorithm improves the size of the $\ell_2$-volumetric spanner (aka $\ell_2$-well-conditioned subset), shaving a factor of $\log \log d$ off the bound of [WY’23] and achieving an optimal bound up to a constant. We emphasize that the same local search algorithm obtains optimal size for $\ell_p$-volumetric spanners for any $p\ge 1$. Moreover, we would like to once again emphasize that our local search approach also provides “optimal” size coresets for MVEE. So, it has applications beyond the work of [WY’23]. >(2) The core part of the proposed method is the Local Search, where the set $S$ is initialized by [Civril and Magdon-Ismail'09] and the subsequent operations are just swaps between $S$ and $\overline{S}$. What's the running time of obtaining the initial $S$? Regarding the running time of the greedy initialization, it is basically the same as [WY’23]. 
Note that in local search we need to remove one vector and then greedily add the best one from the remaining vectors, so each round of local search is a factor of $d$ more expensive than a round of greedy initialization. Since local search terminates in $O(d\log d)$ rounds while the greedy algorithm terminates in $O(d)$ rounds, the greedy initialization part is overall faster than the local update part by a factor of $O(d \log d)$ and does not affect the total runtime asymptotically.
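For readers who want to experiment, the greedy-initialization-plus-swaps procedure discussed in this thread can be sketched naively in a few lines. This is our reading of the algorithm (determinant objective, $(1+\delta)$-improving swaps) and deliberately omits the fast leverage-score machinery, so it is an illustration rather than the paper's implementation:

```python
import numpy as np

def local_search_spanner(V, r, delta=0.1, max_rounds=100):
    """Naive sketch of local search for an l2 volumetric spanner.

    V: n x d array whose rows are assumed to span R^d. Returns a multiset S
    (list of row indices) of size r that is locally optimal for
    det(sum_{j in S} v_j v_j^T) up to (1 + delta)-improving swaps.
    """
    n, d = V.shape
    ridge = 1e-12 * np.eye(d)  # keeps early (rank-deficient) Gram matrices nonsingular

    def det_of(idx):
        return np.linalg.det(sum(np.outer(V[j], V[j]) for j in idx) + ridge)

    # Greedy initialization: repeatedly add the index maximizing the determinant.
    S = []
    for _ in range(r):
        S.append(max(range(n), key=lambda i: det_of(S + [i])))

    # Local search: swap one element for another while det improves by (1 + delta).
    for _ in range(max_rounds):
        cur, improved = det_of(S), False
        for pos in range(r):
            rest = S[:pos] + S[pos + 1:]
            for i in range(n):
                if det_of(rest + [i]) > (1 + delta) * cur:
                    S, improved = rest + [i], True
                    break
            if improved:
                break
        if not improved:
            break
    return S
```

Each candidate swap recomputes the determinant from scratch, which is exactly the naive cost accounting discussed above; the rebuttal's point is that leverage-score sketching replaces these recomputations.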
Summary: This paper studies the problem of constructing small volumetric spanners. Given a set S of points in R^d, (the l2 version of) the problem is to find a subset of S so that every point in S can be written as a linear combination of the subset points, with the l2 norm of the coefficients bounded above by 1. Analogous problems can be defined for other lp norms. There is a simple and efficient greedy algorithm that by folklore results gets a spanner of size O(d log d); there is an improvement due to Todd that gets O(d log log d), and Hazan, Karnin, and Meka get 12d via methods involving the John ellipsoid and spectral sparsification. The main result of this paper is that a slight modification of the greedy algorithm actually computes a spanner of size 3d. Strengths: This paper gives a simpler and (slightly) better algorithm for computing an l2 volumetric spanner: specifically, a natural greedy approach works. It also gives simple extensions to other lp norms, and matching lower bounds. Weaknesses: The improvement in the spanner size is modest compared to prior work, and the algorithm is quite similar to known approaches (the only difference is that instead of iteratively adding elements to the spanner, each time we add a new element we also remove one), as is the analysis. I would not be entirely sure that this algorithm/analysis is not already present in the literature. However, since I could not find a reference I would err on the side of acceptance. It would have been good to explicitly mention the bound obtained by the prior work (Hazan, Karnin, and Meka 2013). Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >The improvement in the spanner size is modest compared to prior work, and the algorithm is quite similar to known approaches (the only difference is that instead of iteratively adding elements to the spanner, each time we add a new element we also remove one), as is the analysis. I would not be entirely sure that this algorithm/analysis is not already present in the literature. However, since I could not find a reference I would err on the side of acceptance. >It would have been good to explicitly mention the bound obtained by the prior work (Hazan, Karnin, and Meka 2013). Please see the “message to all reviewers” above (common rebuttal). As for the algorithm being known, it indeed is, as it is simply local search! But the key aspect of local search is the choice of the objective function. In this case, we use an appropriate determinant, and we have to argue that the output of local search with this objective is a well conditioned basis. We also note that the general $\ell_p$ case has interesting behavior which was not known before, and moreover, the exact same algorithm works for all $p$! (Earlier, the cases $p=2$ and $p=\infty$ were handled by different methods). --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the response. I concur with the other reviewers about including a comparison table with the prior work, and I retain my score.
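As a side note, the defining property in the summary above (every point expressible as a combination of spanner points with coefficient $\ell_2$-norm at most $c$) is easy to check numerically for a candidate subset via the minimum-norm least-squares solution. A small sketch; the function name and setup are ours, not from the paper:

```python
import numpy as np

def spanner_approx_factor(V, S):
    """Smallest c such that the rows indexed by S form a c-approximate l2
    volumetric spanner of the rows of V, assuming those rows span the data:
    for each row x of V, take the minimum-norm a with B^T a = x and report
    the largest ||a||_2 over all rows.
    """
    B = V[S]                     # |S| x d matrix of spanner vectors
    pinv = np.linalg.pinv(B.T)   # maps x in R^d to its min-norm coefficient vector
    coeffs = V @ pinv.T          # n x |S|; row i holds the coefficients for V[i]
    return float(np.linalg.norm(coeffs, axis=1).max())

# Toy check: the standard basis of R^3 spans these points with coefficient
# norm at most 1, so the factor is 1.
V = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [0.5, 0, 0]])
assert abs(spanner_approx_factor(V, [0, 1, 2]) - 1.0) < 1e-9
```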
Summary: This paper looks at the problem of identifying volumetric spanners from a set of points in $\mathbb{R}^d$. The paper shows a method of local search which can be extended to find near optimal bounds of volumetric spanners, improving on previous algorithms. The authors also apply it to an application called MVEE, where they provide better rates for finding coresets of MVEE than previously in the literature. The authors start with the generalization of volumetric spanners to the $c$-approximate use case and analyze their local search method to show that for an $\ell_2$ volumetric spanner of size $3d$, the local search takes only $O(d \log d)$ iterations. They also extend their results to the general $\ell_p$ cases for $p = 1$, $p \in (1,2)$ and $p > 2$, the result holding trivially for the latter. They analyze the coresets of the MVEE problem to show that they can obtain coresets of size $O(d/\epsilon)$ using the local search method, which is an improvement over previous methods which have $O(d/\epsilon^2)$ bounds. Strengths: I have not completely checked the proofs, but the results are strong if the proofs hold. Particularly important is their usage in the spanning-subset setting, which is a frequent need in today's age of very large datasets: having a small subset spanning the majority of the data points can open doors to a lot of efficient analysis with higher-dimensional data, as well as help us perform matrix-vector operations on such datasets more efficiently. The problem is very easy to describe and motivate and the algorithm is easy to follow. Weaknesses: The comparison of results is quite difficult to follow from the paper. In particular, it could be pretty easily explained by having a table or having a clear description of the best possible previous result and how much it was improved by through the local search method. I am also not clear about the degree of optimality, which can be more clearly stated in the description itself. 
It would also be useful to have a description of the degree of optimality of the previous attempts to solve this problem. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please look at the weakness section. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >The comparison of results is quite difficult to follow from the paper. In particular, it could be pretty easily explained by having a table or having a clear description of the best possible previous result and how much it was improved by through the local search method. I am also not clear about the degree of optimality, which can be more clearly stated in the description itself. It would also be useful to have a description of the degree of optimality of the previous attempts to solve this problem. We thank the reviewer for this comment. We addressed this in the “message to all reviewers” above (common rebuttal), and we will add the discussion to the final version of the paper. --- Rebuttal Comment 1.1: Comment: I have gone through reply by the authors and the description of the improvements over previous work and I would like to retain my score.
Summary: This paper develops and analyzes a local-search algorithm for finding volumetric spanners under various norms in the regime where the number of given vectors $n$ is at least as high as the dimension $d$ of these vectors. Moreover, the runtime of the algorithm is analyzed and the size of the algorithm's output is compared to existing lower bounds. Finally, this paper presents applications of the algorithm to the minimum volume enclosing ellipsoid (MVEE) problem (in the paper's body) and other low-rank problems (in the appendices). Strengths: The main novelty of the paper is the development of a (relatively) simple algorithm that obtains near-optimal results under different norms. A minor novelty is the improvement in the $\ell_2$ case where the existing upper bound for computing well-conditioned coresets was improved from $O(d\log\log d)$ to $O(d)$. A particular point of originality is the application of the proposed algorithm to generate a concrete solution to the $\ell_p$ subspace embedding problem, in contrast to existing non-constructive solutions. Weaknesses: This paper suffers from three major weaknesses: (i) unfocused comparisons of the paper's results to existing literature, (ii) technical issues with the mathematical results, and (iii) the lack of important details in several parts of the paper. Below, I outline some specific instances: (i.1) [Woodruff and Yasuda, 2023] - How does the notion of “distortion” in Definition 1.2 of [Woodruff and Yasuda, 2023] factor into the analysis of this paper? For example, does the coreset generated by the algorithm have high distortion when applied to the well-conditioned $\ell_2$ coreset problem? (i.2) Runtime and Solution Size - It is difficult for the reader to find how the proposed algorithm compares to existing literature, as comments are generally scattered throughout the paper. A table that compares runtime and solution size would help in improving this issue. (ii.1) Proof of Thm. 
4.2 - Assuming that indeed $OPT_X \leq (1+\epsilon)d + OPT_T$, the only conclusion is that $vol(MVEE(X)) \leq \exp((1+\epsilon)d) \times vol(MVEE(S))$, which does not coincide with the definition at the beginning of Subsection 2.1, as claimed. (ii.2) Lemma 3.4 - The analysis seems to break down when $r\leq d+1$, but $r$ does not appear to be restricted in the algorithm (indeed, the “Initialization” discussion following Algorithm 1 seems to suggest that $r=d, d+1$ are valid cases). (ii.3) Section 3.2 - How do we know that $M$ remains invertible throughout the entire runtime of the algorithm? None of the results (including Lemma 3.3) seem to suggest that $\det M \neq 0$. (iii.1) Algorithm 1 - What is the significance of the parameter $r$? Can I set it to $d$ to be optimal? (iii.2) Algorithm 1 - A description of the initialization subroutine and its properties is missing. This is important for the reader to verify that the presented algorithm is valid, e.g., $M$ starts with full rank. (iii.3) Proof of Thm 4.2 - Why is $\sum n_i = r$? (iii.4) Proof of Thm 4.2 - Why is $H^{-1} = M / r$? Specifically, why is $n_i = 1$ for all $i$ (which is needed for this identity to hold in general)? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Aside from the issues and questions pointed out in the “Weaknesses” section, I have a few minor suggestions/questions below: (1) Please use either $u_i$ or $\lambda_i$ for the dual variables in the proof of Theorem, but do not use both. (2) Please make the use of parameter $r$ explicit somewhere in Algorithm 1. At first glance, it appears to be an unused parameter, e.g., setting $r=d$ and $r=1000d$ has no effect on the result of the algorithm. (3) In equation (4), $r$ can be fractional. This doesn't seem to make sense in the context of the “Initialization” discussion after Algorithm 1. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors have sufficiently addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >(i.1) [Woodruff and Yasuda, 2023] - How does the notion of “distortion” in Definition 1.2 of [Woodruff and Yasuda, 2023] factor into the analysis of this paper? For example, does the coreset generated by the algorithm have high distortion when applied to the well-conditioned $\ell_2$ coreset problem? The notion of distortion in [WY23] is for low rank approximation; it is not related to volumetric spanners / well-conditioned bases, which are the subject of our paper. So in short, the question of distortion does not arise. >(i.2) Runtime and Solution Size - It is difficult for the reader to find how the proposed algorithm compares to existing literature, as comments are generally scattered throughout the paper. A table that compares runtime and solution size would help in improving this issue. Thanks for the suggestion, we will add a clear discussion in the final version. Please see our response to Reviewer 5 (4Fat) as well. >(ii.1) Proof of Thm. 4.2 - Assuming that indeed $OPT_X \le (1+\epsilon)d + OPT_T$, the only conclusion is that $vol(\mathrm{MVEE}(X)) \le \exp((1+\epsilon)d) \times vol(\mathrm{MVEE}(S))$, which does not coincide with the definition at the beginning of Subsection 2.1, as claimed. This is exactly the definition at the beginning of 2.1 – see the equation there. It is not a “strong coreset”, as we also discuss in Section 2.1. >(ii.2) Lemma 3.4 - The analysis seems to break down when $r \le d+1$, but $r$ does not appear to be restricted in the algorithm (indeed, the “Initialization” discussion following Algorithm 1 seems to suggest that $r = d, d+1$ are valid cases.) Yes, $r$ can be any value larger than or equal to $d$. Note that when $r = d, d+1$, the approximation guarantee is $\Omega(d)$. So, as mentioned in Theorem 3.6, the more interesting values of $r$ are larger, e.g. $r = 3d$, which results in a $1$-approximation. Perhaps the reviewer is referring to the denominator being $r-d+1$. This becomes zero when $r = d-1$, not $d+1$. 
(If we misunderstood, please refer us to the point in the analysis where you think it breaks.) >(ii.3) Section 3.2 - How do we know that $M$ remains invertible throughout the entire runtime of the algorithm. None of the results (including Lemma 3.3) seem to suggest that $\det M \neq 0$. Note that the determinant of $M$ does not decrease in the local search update (see condition in line 5). It remains to show that the selected $M$ in the initialization step is invertible. This easily follows from (a) the fact that the greedy algorithm maximizes volume (which is proportional to the determinant) and (b) the fact that the full set of vectors spans a $d$-dimensional space (which was an assumption we can make without loss of generality, as we can always work with the span of the vectors). For the sake of clarity, we will add this discussion to the “Preliminaries and Notation” section of the paper. >(iii.1) Algorithm 1 - What is the significance of the parameter $r$? Can I set it to $d$ to be optimal? As also mentioned in (ii.2), one can set $r$ to any value $\ge d$. However, there is a trade-off between the approximation factor and $r$. When $r = d$ (or $d + o(1)$), the approximation guarantee is $\Omega(d)$. Also as mentioned in Theorem 3.6, setting $r = 3d$, the approximation factor becomes one. >(iii.2) Algorithm 1 - A description of the initialization subroutine and its properties is missing. This is important for the reader to verify that the presented algorithm is valid, e.g., $M$ starts with full rank. In Algorithm 1, it is mentioned that the pre-processing is described in the text. $M$ is a full-rank matrix by the fact that we are maximizing the volume. >(iii.3) Proof of Thm 4.2 - Why is $\sum n_i = r$? $n_i$ denotes the number of times index $i$ appears in $S$ (and size of $S$ as a multiset is $r$). So by definition, $\sum n_i = r$. >(iii.4) Proof of Thm 4.2 - Why is $H^{-1} = M / r$? Specifically, why is $n_i = 1$ for all $i$ (which is needed for this identity to hold in general). 
By the definition of $H$, $H^{-1} = \sum_i \lambda_i v_i v_i^T = \sum_i (n_i / r) v_i v_i^T = (1/r) \sum_i n_i v_i v_i^T = M/r$. In all of this, nowhere is it required that $n_i = 1$. What *is* needed is the invertibility of $H$. This is discussed in the answer to (ii.3) above. >(1). Please use either $u_i$ or $\lambda_i$ for the dual variables in the proof of Theorem, but do not use both. Thanks, we will fix this inconsistency. >(2). Please make the use of parameter r somewhere in Algorithm 1. At first glance, it appears to be an unused parameter, e.g., setting $r = d$ and $r = 1000d$ has no effect on the result of the algorithm. No, $r$ is used in the Initialization step of the algorithm. The set $S$ is always of size $r$ – so it is indeed very crucial. We will add a note to make this more explicit. And of course, in Theorem 3.6, both the approximation guarantee and runtime are functions of $r$ and $d$. So, $r$ plays a very important role in the final guarantee of the constructed volumetric spanner. >(3). In equation (4), $r$ can be fractional. This doesn't seem to make sense in the context of the “Initialization” discussion after Algorithm 1. We will fix the typo and set it to be the ceiling of $(1 + 4/\epsilon) d$. --- Rebuttal Comment 1.1: Title: On (ii.1), (ii.2), (2) Comment: **On (ii.1)** > This is exactly the definition at the beginning of 2.1 – see the equation there. It is not a “strong coreset”, as we also discuss in Section 2.1. No, it is not (note the $\exp(\cdot)$ term in my version). For a more concrete example, consider the case of $\epsilon = 0$ and $d=1$. The definition near the beginning of Subsection 2.1 yields $vol(MVEE(X))\leq vol(MVEE(S))$, but Thm. 4.2 only implies $vol(MVEE(X))\leq e \cdot vol(MVEE(S)) \approx 2.72 \cdot vol(MVEE(S))$. That is, your algorithm is provably worse than any $e$-approximation algorithm. **On (ii.2)** Yes, you are correct. I meant $r \leq d-1$. Please add a precondition for $r$ in Lemma 3.4 to make this clear. 
**On (2)** Make sure to do this inside the algorithm as well, to make it more self-contained. --- Reply to Comment 1.1.1: Title: clarification of typo Comment: Ah, sorry about the confusion. This is caused by a typo: the term $(1+\epsilon)d$ should actually be $d \ln (1+\epsilon)$. We argue that $H/(1+\epsilon)$ is a feasible solution (line 311 in the submission). Plugging it into the objective, we have $OPT_X \le -\ln \det(H/(1+\epsilon)) = d \ln(1+\epsilon) - \ln \det(H)$. (This is because $\det(H/c) = \det(H)/c^d$ for a $d$-dimensional $H$, and any constant $c$.) We will correct this in the final version, and thank the reviewer for carefully checking.
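The scaling identity $\det(H/c) = \det(H)/c^d$ underlying this correction can be sanity-checked numerically in a few lines (a standalone illustration, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, eps = 4, 0.3
A = rng.normal(size=(d, d))
H = A @ A.T + np.eye(d)  # an arbitrary positive-definite d x d matrix

# det(H / c) = det(H) / c^d for any scalar c > 0 ...
c = 1 + eps
assert np.isclose(np.linalg.det(H / c), np.linalg.det(H) / c**d)

# ... so the feasible solution H/(1+eps) plugged into -ln det(.) gives
# d*ln(1+eps) - ln det(H), the corrected bound from the reply above.
lhs = -np.log(np.linalg.det(H / c))
rhs = d * np.log(c) - np.log(np.linalg.det(H))
assert np.isclose(lhs, rhs)
```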
Rebuttal 1: Rebuttal: **Message to all Reviewers.** We thank all reviewers for their careful and constructive comments. Here we address the questions regarding the applications of our results to machine learning and the improvements of our work over prior works. We answer all other questions in the individual responses. Please let us know if you have any further questions/comments. **Applications to Machine Learning.** (The following are mentioned briefly in the introduction, but we can expand on them in the final version.) Well-conditioned bases were introduced and studied by Awerbuch and Kleinberg (2008) and Hazan et al. (2013), as a good “exploration basis” for bandit algorithms on convex domains. Bandit optimization is a fundamental and well-studied problem in ML –which itself has many applications– and our results give improvements over these prior works (explained below). The second main application is matrix low-rank approximation. The problem of finding a small (in cardinality) subset of columns whose combinations can be used to approximate all the other columns of a matrix is called “column subset selection”. It has been used for matrix sketching, “interpretable” low rank approximation, and has applications to streaming algorithms as well. The classic works of Frieze and Kannan 1997; Drineas, Mahoney and Muthukrishnan 2006; Boutsidis et al. (cited in the paper) all study this question. Well-conditioned bases are closely related to column subset selection, and indeed, the works of Boutsidis et al. exploit this connection. The recent paper of Woodruff and Yasuda (STOC 2023) expands on this, using well-conditioned bases for a host of matrix approximation problems. Third, well-conditioned bases are closely related to determinantal point processes (DPP) and diversity maximization, also well studied in the ML literature. Indeed, our techniques are derived from work in this space. 
Finally, as we point out in the paper, we obtain coresets for the classic problem of min volume enclosing ellipsoids (MVEE). Coresets are an object of extensive study in ML, used to obtain a “representative” sample of a given dataset. Many works have tried to construct “optimal” sized coresets for various data analysis problems, like clustering and low rank approximation. We do this for MVEE, obtaining coresets of size linear in the dimension. This is conceptually interesting, because it matches the coreset size for a much simpler object – the axis-parallel bounding box (which always has a coreset of size $\le 2d$, obtained by taking the two extreme points along each axis). **Improvement over prior work, significance of the results.** We will highlight the following discussion in the final version. First off, we note that a unified treatment of the $\ell_p$ well-conditioned bases (for general $p$), along with matching lower bounds has not been done in any of the prior work. Existing results treated the cases $p=2$ and $p=\infty$, using different techniques. For $p=2$, there are two main prior works. The first is the work of Hazan et al. 2013. They obtain a linear sized basis, similar to our result. However, their result is weaker in terms of running time (by a factor roughly $d^2$), as well as the constants in the size of the basis. Moreover, our algorithm is much simpler, it is simply local search with an appropriate objective, while theirs involves running a spectral sparsification subroutine (that involves keeping track of barrier potentials, etc.) followed by a rounding step. The second work is the recent result of Woodruff and Yasuda (2023). Here the algorithm is simple – basically a greedy addition (and thus the running time is comparable to ours, up to a logarithmic factor). However, the authors incur an additional $\log \log d$ factor, due to the analysis of the greedy algorithm (which itself was from a prior work of Todd 2016). 
Removing this factor is interesting conceptually, as it shows that local search with an appropriate objective can achieve something stronger than the best known result for greedy.
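To make the "local search with a determinant objective" idea concrete, here is a minimal sketch of the classic $\ell_\infty$ (barycentric) spanner construction in the style of Awerbuch and Kleinberg (2008), which the rebuttal cites as the origin of well conditioned bases. This is illustrative only — it is not the paper's exact $\ell_p$ algorithm, and the function names and parameters (`barycentric_spanner`, `eps`, `seed`) are ours:

```python
import numpy as np

def barycentric_spanner(X, eps=0.01, seed=0):
    """Local-search sketch of an l_inf (barycentric) spanner.

    Repeatedly swaps a row of the current basis for another row of X whenever
    the swap grows |det| by a (1+eps) factor; since each accepted swap
    multiplies the volume, the loop terminates.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    # Start from any subset of d rows that is non-singular.
    S = list(rng.choice(n, size=d, replace=False))
    while abs(np.linalg.det(X[S])) < 1e-12:
        S = list(rng.choice(n, size=d, replace=False))
    improved = True
    while improved:
        improved = False
        vol = abs(np.linalg.det(X[S]))
        for j in range(n):
            for pos in range(d):
                T = S.copy()
                T[pos] = j
                if abs(np.linalg.det(X[T])) > (1 + eps) * vol:
                    S, improved = T, True
                    vol = abs(np.linalg.det(X[S]))
    return S

def coefficients(X, S, x):
    """Coefficients c with x = sum_i c[i] * X[S[i]].

    At a local optimum, Cramer's rule gives |c[i]| <= 1 + eps for every
    row x of X, i.e. an l_inf-well-conditioned basis of size exactly d.
    """
    return np.linalg.solve(X[S].T, x)
```

At termination every data vector is represented with coefficients of $\ell_\infty$-norm at most $1+\epsilon$; the paper's $\ell_2$ result discussed above instead allows a slightly larger subset (size at most $3d$) with an $\ell_2$ bound on the coefficients.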
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper considers the problem of constructing a volumetric spanner: a subset of vectors which allows one to represent every other vector via a linear combination with coefficients whose lp-norm is small. The algorithms are based on local search and work for any lp-norm for p>=1. As a representative result (Theorem 3.6), given a set X of n>=d vectors, one can find a subset of size at most 3d which can be used to represent all vectors in X via linear combinations with l2-norm of the coefficients <= 1. This requires O(d log d) iterations of local search. For the l1-norm, an exponential lower bound is given (Theorem 3.8). The result for p>2 follows trivially from Theorem 3.6, and for 1<p<2 a construction with certain parameters is given. Strengths: The paper has clear results and is well-written. I think I understand all the main results and techniques quite well. Weaknesses: The applications to machine learning problems are somewhat weak (a few examples are given in the intro to column subset selection and sparse coding, but I am not exactly sure what the implications of the results in this paper are for the applications). The claim that MVEE is an application is that we get an improvement on the size of the best known coreset construction. What is the magnitude of this improvement? Does this yield an algorithmic speedup? There is an application mentioned to [Woodruff, Yasuda’23] where a log log d factor can be shaved. Can you elaborate on the implications of this as well? The discussion in Appendix A.1 is hard to follow. Can you restate the WY’23 result and explain what the contribution here is? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: – Line 17: sparse coding or problem? – See a question regarding applications below. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The applications to machine learning problems are somewhat weak (a few examples are given in the intro to column subset selection and sparse coding, but I am not exactly sure what the implications of the results in this paper are for the applications). Please see our above message to all reviewers for discussion on applications of our results to machine learning. >The claim that MVEE is an application is that we get an improvement on the size of the best known coreset construction. What is the magnitude of this improvement? Does this yield an algorithmic speedup? As mentioned above (in the message to all reviewers), finding the “optimal” coreset size for MVEE has been studied in several previous works (including Kumar and Yildirim 2007, and Todd 2016). We improve the size from $O(d \log \log d)$ to $O(d)$, using an algorithm of similar complexity. Numerically, this is not much, but it shows that (a) finding min enclosing ellipsoids admits coresets of linear size, similar to much simpler problems, like finding an axis parallel bounding box, and (b) local search offers an improvement over greedy coordinate ascent. >There is an application mentioned to [Woodruff, Yasuda’23] where a log log d factor can be shaved. Can you elaborate on the implications of this as well? The discussion in Appendix A.1 is hard to follow. Can you restate the WY’23 result and explain what the contribution here is? [WY23] gives many “black box” applications of well conditioned spanning sets, including entrywise Huber error low rank approximation, oblivious subspace embeddings, etc. In all these works, we can instead use our construction; this gives an improvement by a factor $\log \log d$, and in many cases, gives the tight bounds. >Q. – Line 17: sparse coding or problem? – See a question regarding applications below. Thanks, we will fix the typo; it should be “sparse coding problem”.
This is only an example where choosing the basis is crucial, it is not directly relevant for us.
null
null
null
null
null
null
Tight Risk Bounds for Gradient Descent on Separable Data
Accept (spotlight)
Summary: This paper considers training a convex linear model with gradient descent on linearly separable data. The paper improves the upper bound of the population risk and proves a matching lower bound. Strengths: The paper closes the gap for training convex linear models with smooth loss functions on linearly separable data, by showing matching upper and lower risk bounds. I believe this can be an important work in this field. Also, the paper is well-written and the logic is easy to follow. The proof is a bit heavy but the authors managed to make it accessible. Weaknesses: I didn't find a major flaw in this paper. The proof looks correct to me. My only concern is that there seems to be little novelty in techniques for a theoretical work. In particular, the improvement on the upper bound of the test risk relies on the results in Srebro et al. [15]. But I believe the proof of the lower bound has some nice intuition, which probably should be further explained. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. For the lower bound in Theorem 1, is there a way you can derive the minimum of the RHS with respect to the epsilon? 2. In the proof of Theorem 2, you consider two different regimes where $T \gg n$ and $n \gg T$. How can this be seen in Lemmas 3 and 4? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: As a theory paper, I think it's fine to consider linear models. Therefore, I don't see an important limitation.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. We respond at length to all of your questions below; in case there are any remaining concerns, we will be glad to clarify them during the discussion period. > Re novelty of techniques In terms of upper bounds, we believe that simplicity is an advantage of our proof approach: we get stronger (and nearly tight) upper bounds using standard techniques in optimization and online learning, which improve upon the more involved proof techniques used in previous works. We think this is significant as it puts the setting of unregularized & separable linear classification back within a general and common framework, instead of analyzing it with techniques and assumptions that are tailored to the specifics of the problem. That said, we also emphasize that, to us, the main technical novelty of the paper is actually in its lower bound constructions, which are new to the literature on unregularized & separable linear classification. > “The proof of the lower bound has some nice intuition, which probably should be further explained” Agreed - we will work on including further explanations about the intuition for our final version. Thanks for this suggestion! > “In theorem 1, is there a way you can derive the minimum of the RHS with respect to the epsilon?” Good question - for every tail function $\phi$, the $\epsilon$ which optimizes the bound is the one that satisfies the equality $\phi^{-1}(\epsilon)^2/(\gamma^2 T) = \epsilon$. That is, the optimal $\epsilon$ for a given loss class is specified in a general yet somewhat implicit way, but at the same time, can be easily computed in a case-by-case fashion. In Appendix A we show how to derive the bounds that Theorem 1 implies after choosing the best $\epsilon$ for a handful of loss classes. > “In the proof of Theorem 2, you consider two different regimes where $T \gg n$ and $n \gg T$.
How can this be seen in Lemmas 3 and 4?” Lemmas 3 and 4 are based on two different hard instances of the problem. For the instance defined in Lemma 3, we get a lower bound of $\phi^{-1}(128\epsilon)^2 / (\gamma^2 n)$, and for the instance defined in Lemma 4, we get a lower bound of $\phi^{-1}(8\epsilon)^2 / (\gamma^2 T)$. We only consider the relationship between $T$ and $n$ in Theorem 2, where for any regime of parameters we pick the relevant instance that establishes the lower bound (in other words, each of these lemmas is relevant for one of these regimes). --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for addressing my questions. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful response! We'll revise our paper based on the constructive feedback from the reviews.
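For illustration only (this instantiation is ours, not from the rebuttal itself): with an exponential tail $\phi(x) = e^{-x}$, the balancing condition $\phi^{-1}(\epsilon)^2/(\gamma^2 T) = \epsilon$ stated in the response above can be solved up to logarithmic factors:

```latex
\phi^{-1}(\epsilon) = \ln\frac{1}{\epsilon}
\quad\Longrightarrow\quad
\frac{\ln^2(1/\epsilon)}{\gamma^2 T} = \epsilon
\quad\Longrightarrow\quad
\epsilon = \widetilde{\Theta}\!\left(\frac{1}{\gamma^2 T}\right),
```

recovering, up to log factors, the familiar $1/(\gamma^2 T)$ risk rate for exponentially tailed (e.g., logistic-type) losses.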
Summary: This paper examines the problem of learning linearly separable data with margin using gradient descent (GD) and focuses on establishing a population risk bound on the GD output. Previous research in this area has primarily concentrated on understanding the implicit bias and population error associated with solving this task using logistic regression. In contrast, this study introduces a broader class of loss functions known as $C_{\pi,\beta}$, which are non-negative smooth loss functions that converge to zero more rapidly than a reference loss function $\pi$. The main contribution of this paper is the derivation of both upper and lower bounds on the population risk of GD (including stochastic gradient descent, SGD) for this class of loss functions. The authors employ a straightforward and elegant proof technique that revolves around controlling the norm of the GD output and its empirical risk. In order to establish a population error guarantee, the authors rely on uniform convergence results specifically tailored for linear models. Strengths: This paper is extremely well-written and easy to follow. Also, this paper shows that bounds on the empirical error of (S)GD and the norm of the solution suffice to provide a sharp analysis of the population error. Compared to other work in this line, proofs in this work are more straightforward. Also, the population risk guarantee in this paper can be seen as an algorithm-dependent uniform convergence result, which is also interesting. Weaknesses: In classification tasks, the ultimate goal is often to minimize the 0-1 loss function, which directly measures the accuracy of classification. However, when employing gradient descent (GD) optimization, it is common to use surrogate loss functions such as logistic loss or hinge loss. These surrogate loss functions approximate the 0-1 loss and are more amenable to optimization. In this paper, all the results focus on controlling the population risk of the surrogate loss functions.
It is important to note that many surrogate loss functions can provide an upper bound on the 0-1 loss function. However, for the specific case of logistic loss, there appears to be a discrepancy between the upper bound on the logistic loss presented in this paper and the upper bound on the 0-1 loss. This discrepancy suggests that overfitting may occur, and the concept of early stopping becomes relevant. Something that I fail to understand is why the authors do not focus on providing a classification error bound for $C_{\pi,\beta}$. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1- In Corollary 1, the authors instantiate their result for logistic regression. It shows that when $T$ is exponentially large in the number of samples, then we have overfitting. How does this result compare to Shamir 2021? 2- Can the lower bound be extended to the case that it holds for every $\phi$ and “every” loss in $C_{\pi,\beta}$? The current statement is different. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: One of the limitations is that this paper does not provide any evidence of "implicit bias". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. We respond at length to all of your questions below; in case there are any remaining concerns, we will be glad to clarify them during the discussion period. > “Why the authors do not focus on providing a classification error bound for $C_{\phi,\beta}$” As you mentioned in your review, our primary focus in the paper is on the population risk rather than directly on the classification error, following the recent literature in this context (e.g., Schliserman and Koren (2022), and Telgarsky (2022)). As we discuss in our introduction (and Schliserman and Koren (2022), and Telgarsky (2022) discuss more extensively), there are several reasons to study the loss/risk rather than the error directly; for example, it can lead to several interesting conclusions about the significance of early stopping. That said, our risk upper bound for the class of losses $C_{\phi,\beta}$ directly implies a similar bound on the (test) classification error, for all losses in this class that also upper bound the 0-1 loss (namely, whose value at zero is one). > 1. “In Corollary 1, the authors instantiate their result for logistic regression. It shows that when $T$ is exponentially large in the number of samples, then we have overfitting. How does this result compare to Shamir 2021?” Shamir (2021) gives an upper bound for the test 0-1 error, while we prove a lower bound for the test **loss (risk)**. There is no contradiction between the two results: when running GD on the logistic loss, it does not overfit with respect to the 0-1 error; however, it may overfit with respect to the risk. This discrepancy between the risk and the 0-1 error is actually a central aspect in our setting (see discussion in lines 83-91).
> “Can the lower bound be extended to the case that it holds for every $\phi$ and “every” loss in $C_{\phi,\beta}$?” That’s a good point - the bound unfortunately cannot hold simultaneously for all losses in $C_{\phi,\beta}$. Note that $C_{\phi,\beta}$ contains all loss functions that decay to zero faster than $\phi$, and therefore contains losses that decay strictly faster than $\phi$, say faster than some $\tilde\phi \ll \phi$. For such losses, we can obtain a better upper bound from Theorem 1 (the one for the class $C_{\tilde\phi,\beta}$), which means that the lower bound for $C_{\phi,\beta}$ cannot hold for them. This is often the case with lower bounds, of course: one demonstrates a hard instance of the problem within a **class of problems**, in order to conclude that stronger upper bounds for the same class of problems do not exist. > “This paper does not provide any evidence of ‘implicit bias’” In fact, our results do point to some form of “implicit bias”: the “implicit bias” in our setting holds in the form of a bias toward models with a small norm. Indeed, we show that despite working in an unregularized and unconstrained regime, the effective set of possible models GD might find is relatively small (with norm bounded by a quantity depending on the tail decay rate). Then, we can use uniform convergence arguments on this set of possible solutions to obtain generalization bounds. --- Rebuttal Comment 1.1: Title: after rebuttals. Comment: Thanks for addressing my questions! I raise my score to 7. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your kind reply. We will revise our paper according to the constructive comments in the reviews.
Summary: This paper studies the generalization properties of gradient methods for convex, smooth and decreasing losses (e.g., logistic and polynomial losses) over linearly separable data distributions. The contributions are 1) a generalization bound based on Rademacher complexity which does not require the self-boundedness assumption for the loss and 2) a lower bound on the test loss which nearly matches the upper bound. Overall, the results indicate that Rademacher complexity (or algorithmic stability) analyses for data-dependent generalization of GD are rate-optimal, as the lower bounds match the upper bounds in $n,T,\gamma$. Strengths: The paper is in general well-written and clear, although a few inconsistencies exist throughout the main body. The studied problem is interesting, as understanding the generalization of GD on separable data is foundational to machine learning. This paper takes a step in this direction by studying linear models. The paper's results imply that the commonly studied margin-based generalization bounds, which usually take the form $1/(\gamma^2 n)$, cannot be improved in $\gamma,n$, or other parameters related to tail behavior, unless additional assumptions on data are imposed. The paper's claims are new and are also applicable to a broad class of smooth losses. The main contribution of the paper, the lower bound on the test loss for a certain class of convex loss functions, involves the construction of a new data distribution which can be useful for future works. Weaknesses: weaknesses and some questions: 1- The derived upper bounds are barely an improvement over known results in the literature. The authors claim in the abstract and also throughout the introduction that their results apply to any smooth loss function. However, this is not the case, as the loss in this work must be monotonically decreasing. This excludes the commonly used square loss, whereas previous results by Lei and Ying 2020 can be applied to the square loss.
I also do not find the self-boundedness condition from previous works limiting, as it holds for almost all commonly used losses, including those considered in this work. Can the authors elaborate on that? 2- Also regarding the training and generalization upper bounds, I am not convinced of the novelty of the approach. The technique based on Srebro et al. is rather simple and well-known. I assume the challenge is in characterizing the Rademacher complexity for linearly separable data, but similar analyses have already been done even for more complex models such as neural networks; for example, see the papers titled "Feature selection and low test error in shallow low-rotation ReLU networks" and "Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks". Can the authors please comment on the challenges in the optimization and generalization analyses and highlight novel steps related to general tail behaviors? 3- What do the bounds look like if $w_1\neq 0$? In particular, it is interesting to know how the generalization bounds behave with respect to initialization. 4- It might be helpful if a new column were added to Table 2 explaining what $r_{\phi,T}$ looks like for each loss. 5- The specific data distribution used for obtaining the lower bound seems very interesting. It can be helpful for readers if the authors provide more discussion on the high-level steps of the proof and their intuition behind the designed distribution. 6- It seems the bound of Theorem 1 contains a $\log^3(n)$ factor which is missing in Table 2. 7- Table 2 shows that tails of the form $\exp(x^{-\alpha})$ are slightly better in test loss performance than exponential tails. Does it lead to faster convergence of the misclassification test error, or generally for real-world experiments? 8- I think the previous works on the lower generalization bounds of SVM can be discussed here.
e.g., "Optimality of SVM: novel proofs and tighter bounds" by S Hanneke, A Kontorovich. Technical Quality: 3 good Clarity: 3 good Questions for Authors: please see the section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper does not have a conclusions section. It can be insightful if the authors discuss some limitations of the methodology and data assumptions. There is no potential negative societal impact associated with this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. We respond at length to all of your questions below; in case there are any remaining concerns, we will be glad to clarify them during the discussion period. > 1. Re improvement over known results in literature First, note that monotonicity of the loss function is not a limiting condition, as any reasonable classification loss function should be monotone in its activation. This is in contrast to regression losses, like the squared loss, which are indeed not monotone (as they penalize two-sided errors). Regarding self-boundedness, the removal of this condition helps us to remove $\log(T)$ factors for the super-exponential tail losses, compared to Schliserman and Koren (2022). Another assumption that we remove is the Lipschitz assumption; thus, we get a tight bound also for non-Lipschitz functions (e.g., the probit loss). The removal of those two conditions is also important from a theoretical perspective. The main contribution of the paper is the matching lower bound that we establish for the class of smooth functions with a given tail decay rate (what we call $C_{\phi,\beta}$), and there is significance in removing extraneous assumptions so that the upper and lower bounds end up matching not only in their rates but also in the set of assumptions they rely on. > 2. Re novelty of the upper bounds approach The approach we take for our upper bound is indeed mostly simple and straightforward - we actually state and discuss this explicitly in the paper (see lines 67-77). As we explain there, we see this as a strength, since previous work in the context of classification with separable data obtained weaker results using more involved proof techniques.
That said, the main step in our approach is not in characterizing the Rademacher complexity — which is indeed very much standard — but rather in showing that, for a very general class of loss functions, the GD iterates remain bounded within a ball of small (tail-decay dependent) radius around initialization, and using Rademacher-based uniform convergence on that ball. Another step is using the same radius for characterizing the convergence rate of GD on the empirical risk. For both of these steps, it is not a priori clear why GD should converge quickly and generalize well despite being executed unconstrained on an unregularized objective, as is the case in our setting, and indeed, this was not the approach taken by much of the previous work in this domain. Please see lines 67-77 in the paper discussing this proof approach. In light of your comments, we will try to improve this discussion in our final version. Thanks! > 3. “What do the bounds look like if $w_1 \neq 0$?” As we explain in the paper, our upper bound is implied by two properties of the optimization algorithm - the norm of the iterates and the optimization error. When $w_1\neq 0$, the norm of the iterate can be increased by at most $\|w_1\|$ compared to the current analysis. Moreover, the optimization error can be bounded by $O(\|w_{\epsilon}^*-w_1\|^2/(\gamma^2 T) + 2\epsilon)$. Combining those two bounds with Proposition 1, we can obtain a test risk bound with an additional additive term that depends on $\|w_1\|^2$. We emphasize that this method is general, and our bounds are applicable to any unregularized optimization algorithm with low iterate norm and low optimization error. > 4. “new column is added to table 2 explaining what $r_{\phi,T}$ looks like” Good idea, thanks - we will revise the table as you suggest in the final version. > 5.
“It can be helpful if the authors provide more discussion on the high-level steps of the proof and the intuition behind the designed distribution” Thanks for this suggestion - we will provide in the final version a more intuitive proof sketch and more high-level explanations around the construction of our lower bound instance. > 6. “$\log^3(n)$ factor is missing in Table 2” Correct - in our tables we suppressed logarithmic factors other than $\log(T)$ factors (e.g., the $\log(1/\delta)$ factor is also suppressed). We will make this clear in the revision. Thanks. > 7. “Are tails of the form $\exp(x^{-\alpha})$ better in test loss performance than exponential tails for real-world experiments?” This is a very good question - we are actually not aware of empirical studies that focused on this expected behavior. > 8. “Previous works on the lower generalization bounds of SVM can be discussed” Thanks for this suggestion! We will look into these papers and cite them accordingly in the final version. It is also worth noting that the primary focus of our work is on upper bounds and lower bounds for the (population) **risk** of GD, rather than its 0-1 classification error; thus, our lower bounds are inherently different in nature from lower bounds for the 0-1 error (e.g., the ones in "Optimality of SVM: novel proofs and tighter bounds"). --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your kind reply. We will revise our paper according to the constructive comments in the reviews.
Summary: This paper establishes tight upper and lower bounds on the population risk of linear models trained with gradient descent (GD) on linearly separable data. In contrast to previous work, the paper's result only requires a smoothness assumption on the loss function, and the upper bounds are adaptive to the loss function's tail decay. Surprisingly, the authors achieve this result using standard techniques, with the generalization bound of [Srebro et al., 2010] based on local Rademacher complexity playing a pivotal role. **References** [Srebro et al., 2010]: Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. *NIPS* 2010. Strengths: **Minimal assumptions and standard proof technique.** The only assumptions on the loss function for the population risk upper bound are convexity, monotonicity, and smoothness, which leads to a clean theorem statement. Moreover, the proof only involves standard tools; one upper bounds the norm of the GD iterate and the empirical risk, and combines the two using the generalization bound from Srebro et al. (2010). In addition, the authors construct a lower bound which shows that their upper bound is nearly optimal for essentially all tail decay rates of the loss function. The minimality of assumptions, tightness of upper bounds, and simplicity of proof techniques showcase the paper’s strengths. Weaknesses: **Motivation for studying losses with diverse tail decays?** Having a more general theorem is beneficial, especially if it can be achieved without much sophistication. Still, I am curious about the utility of the upper bound’s generality. Were there important examples of losses that previous works failed to capture? Are there examples of loss functions with polynomial tails that have been used in the context of classification?
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What were some important losses (either from a theoretical or practical point of view) that previous works failed to cover? - When does it make sense to use loss functions with slow (e.g., polynomial) tail decay? It seems that loss functions with slow tails may encourage the predictor to increase the margin of already correct answers at the cost of neglecting misclassified samples. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and suggestions. We respond at length to all of your questions below; in case there are any remaining concerns, we will be glad to clarify them during the discussion period. > Re Motivation for studying losses with diverse tail decays The relationship between properties of the loss function and the generalization of gradient methods has been studied extensively in recent years, specifically in the setting of classification with separable data (e.g., citations [5,12] in our paper). Our paper contributes to this line of work by demonstrating that generalization is tightly characterized by the loss function’s tail decay rate. To support this claim, our bounds are made very general and they capture a large variety of loss functions. > “What were some important losses (either from a theoretical or practical point of view) that previous works failed to cover?” First, the removal of the self-boundedness condition helps us to remove several $\log(T)$ factors in the super-exponential tail regime, compared to Schliserman and Koren (2022). Second, we also remove their Lipschitz assumption and get a tight bound also for non-Lipschitz functions (e.g., the probit loss). But most importantly, the main contribution of the paper is the matching lower bound that we establish for the class of smooth functions with a given tail decay rate (what we call $C_{\phi,\beta}$). From a theoretical perspective, there is significance in removing extraneous assumptions (self-boundedness and the Lipschitz condition) so that the upper and lower bounds end up matching not only in their rates but also in the set of assumptions they rely on. > “When does it make sense to use loss functions with slow (e.g., polynomial) tail decay?” This is a great question - and one that has not yet received a complete and satisfying answer, to the best of our knowledge.
One earlier work [5] did study polynomially tailed losses, and showed that the iterates of GD in this case converge to a direction with a non-trivial (but not maximal) margin, in contrast to losses with an exponential tail, where the iterates converge to the max-margin solution [14]. Our tight bounds show that this phenomenon is real: polynomial tails are indeed strictly weaker than exponential tails as far as the generalization of GD is concerned. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my questions! --- Reply to Comment 1.1.1: Comment: Thanks a lot for your thoughtful response! We'll revise our paper based on the constructive feedback from the reviews.
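The gradient-weighting effect behind this discussion can be made concrete with a toy calculation. The polynomially-tailed loss below is a hypothetical C^1 construction chosen for illustration (exponential for u <= 1, then an e^{-1}/u tail), not one analyzed in the paper; it shows that under a slow tail, a comfortably-correct point retains far more gradient weight relative to a boundary point than under an exponential tail, which is exactly the mechanism the reviewer's question points at.

```python
import math

def exp_loss_grad(u):
    # -l'(u) for l(u) = exp(-u): gradient weight decays exponentially in margin u.
    return math.exp(-u)

def poly_loss_grad(u):
    # -l'(u) for a smooth loss with a polynomial tail (illustrative construction):
    #   l(u) = exp(-u) for u <= 1,  l(u) = (1/e)/u for u >= 1  (C^1 match at u = 1),
    # so -l'(u) = (1/e)/u**2 on the tail.
    return math.exp(-u) if u <= 1 else math.exp(-1) / u**2

# Gradient weight of an already-correct point (margin 10), relative to a
# boundary point (margin 0), under each tail:
for name, grad in [("exponential", exp_loss_grad), ("polynomial", poly_loss_grad)]:
    print(f"{name}: {grad(10) / grad(0):.2e}")
```

Under the exponential tail the far point's relative weight is about 4.5e-05, versus about 3.7e-03 for the polynomial tail, roughly 80x larger: slow tails keep pushing on already-correct examples long after an exponential-tail loss has effectively forgotten them.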
NeurIPS_2023_submissions_huggingface
2023
BIOT: Biosignal Transformer for Cross-data Learning in the Wild
Accept (poster)
Summary: This paper proposes a model called Biosignal Transformer (BIOT) that allows cross-dataset learning with different numbers of channels, sequence lengths, and missing values. BIOT consists of a tokenization module that transforms multi-channel signals into a sequence (“sentence”), and a Linear Transformer Encoder to learn latent representations from the tokenized sequence. BIOT can be used for a wide variety of tasks, including self-supervised pre-training, supervised training (without vs. with missing values), and supervised pre-training. Experiments suggest that BIOT outperforms existing methods on various biosignal classification tasks. Strengths: 1. While Transformers have been widely used for modeling biosignals, there is some originality in the design of the token embeddings (segment embedding, channel embedding, and positional embedding). 2. Overall the methods are technically sound and easy to understand. 3. BIOT tackles the following challenges in modeling biosignals across datasets: mismatched channels, variable lengths, and missing values. Weaknesses: 1. My major concerns about the method are: 1) its scalability to biosignals with long sequences and large numbers of channels, and 2) the fact that spatial dependencies among channels are not appropriately modeled (e.g., the graph structure of brain regions is not captured). These limitations should be investigated and discussed. 2. More details about token embeddings are needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As mentioned above, I’m concerned about the scalability of BIOT to biosignals with long sequence lengths and large numbers of channels (which is the case for much real-world data). The chosen datasets have relatively short sequences and small numbers of channels. It would be great to see some experiments on scalability, and its potential limitations should be discussed. 2. Many types of biosignals have graph structures (e.g., brain signals like EEGs). 
Flattening the channels into a sequence may be suboptimal for capturing such data structures. This limitation should be discussed. 3. Please provide more details about token embeddings. For example, how is the energy vector computed? What’s the dimensionality of the embedding table for channel embedding? What are the sinusoidal and cosine functions for positional embedding? If they follow previous studies, please provide citations. 4. More ablation studies are needed to show the impact of each component in BIOT. In particular, ablations for 1) normalization in the biosignal tokenization module and 2) each of the token embeddings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some limitations are discussed in the paper. As mentioned above, please include additional discussion of the following limitations: 1) scalability to long sequences and large numbers of channels, and 2) suboptimality for graph-structured data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer ykvi ------ ###### We thank the reviewer for the constructive feedback. We have uploaded a revision and used blue to mark the new changes. Our detailed responses are as follows. **Q1: My concern is on the scalability to biosignals with long sequences and large numbers of channels. It would be great to see some experiments on the scalability.** Thanks for bringing up this point. The model can already handle long recordings and multiple channels in the following ways: (i) BIOT uses a linear-complexity Transformer, so that the complexity scales linearly with sample length and channel count; (ii) users can also remove the token overlaps and enlarge the token sizes to reduce the number of tokens (i.e., the "sentence" length). The BIOT model can handle long recordings and many channels even better with minor adjustments. For long recordings, we could segment the recordings into 10-30s sessions, apply BIOT to each session, and finally use a top-level LSTM or Transformer to learn a sequence embedding. For many channels (more than 256), we can group neighboring or symmetric channels and tokenize them together, which greatly shrinks the final "sentence" length. During the rebuttal, we added the scalability discussion in Appendix C.7. To provide more insight into the scalability of our model, we have provided a comparison between the linear Transformer and the original Transformer (in terms of performance and running time) with different sample-size inputs in Appendix C.6.

- On the CHB-MIT dataset

| Method | Balanced Acc. | AUC-PR | AUROC | Time per epoch |
|---|---|---|---|---|
| BIOT (with linear Transformer) | 0.6640 ± 0.0037 | 0.2573 ± 0.0088 | 0.8646 ± 0.0030 | 55.5780 ± 0.4229 |
| BIOT (with naive Transformer) | 0.6669 ± 0.0112 | 0.2493 ± 0.0088 | 0.8637 ± 0.0030 | 62.2877 ± 0.3629 |

- On the IIIC Seizure dataset

| Method | Balanced Acc. | Cohen’s Kappa | Weighted F1 | Time per epoch |
|---|---|---|---|---|
| BIOT (with linear Transformer) | 0.5762 ± 0.0034 | 0.4932 ± 0.0046 | 0.5773 ± 0.0031 | 25.4500 ± 4.0835 |
| BIOT (with naive Transformer) | 0.5810 ± 0.0029 | 0.4994 ± 0.0030 | 0.5822 ± 0.0044 | 28.2942 ± 4.0835 |

- On the TUAB dataset

| Method | Balanced Acc. | AUC-PR | AUROC | Time per epoch |
|---|---|---|---|---|
| BIOT (with linear Transformer) | 0.7925 ± 0.0035 | 0.8707 ± 0.0087 | 0.8691 ± 0.0033 | 25.2788 ± 0.1200 |
| BIOT (with naive Transformer) | 0.7902 ± 0.0033 | 0.8673 ± 0.0068 | 0.8652 ± 0.0052 | 27.3423 ± 0.2174 |

- On the TUEV dataset

| Method | Balanced Acc. | Cohen’s Kappa | Weighted F1 | Time per epoch |
|---|---|---|---|---|
| BIOT (with linear Transformer) | 0.4682 ± 0.0125 | 0.4482 ± 0.0285 | 0.7085 ± 0.0184 | 8.1812 ± 0.1228 |
| BIOT (with naive Transformer) | 0.4693 ± 0.0204 | 0.4512 ± 0.0317 | 0.7104 ± 0.0224 | 8.5493 ± 0.1814 |

**Q2: Spatial dependencies among channels are not appropriately modeled (e.g., graph structure of brain regions is not captured). Flattening the channels into a sequence may be suboptimal for capturing such data structures.** Thanks for your question. The current model does not capture the graph structure over channels (which is not the focus of this paper). However, whether or not to capture spatial graphical information is orthogonal to our BIOT model design. Our model can easily adopt designs from Transformer-based models that do capture channel graph structures. The BIOT model can also incorporate channel graph structure in other ways. 
For example, in addition to the segment/channel/positional embeddings, our model could add a spatial embedding to the token embedding by utilizing channel graph representation learning. We have added this discussion in Appendix C.7. **Q3: How is the energy vector computed?** Thanks. We take the FFT of each segment (e.g., 1s), and the result of the FFT is the energy vector, where each entry is the energy for one frequency band. **Q4: What’s the dimensionality of the embedding table for channel embedding?** Thanks. The embedding table for channel embedding is a matrix: the number of rows is the number of channels, and the number of columns is the embedding dimension. **Q5: What are the sinusoidal and cosine functions for positional embedding? If they follow previous studies, please provide citations.** Sure, thanks for reminding us. We added the citation (Line 133) during the rebuttal. The sinusoidal and cosine functions are the same as those used in the original Transformer paper, "Attention Is All You Need". **Q6: More ablation studies are needed to show the impact of each component in BIOT. In particular, ablations for (1) normalization in the biosignal tokenization module and (2) each of the token embeddings.** Sure, we have provided these in Appendix C.9. The different token embeddings are all important and additively useful, since they capture complementary information: signal, frequency, and channel/spatial information. More details are given in Appendix C.9. --- Thanks again for your time and valuable questions. We hope our new results and explanations address your concerns. We are happy to explain any other component of the model. --- Rebuttal Comment 1.1: Comment: Thank you for your replies. Re: Q1, segmenting into short sessions may lose temporal correlations beyond 30s, which could be suboptimal for signals that have inherently long-range temporal correlations. Also, grouping neighboring channels may lose spatial information. 
Therefore, it would be more desirable for BIOT to be able to scale to long sequences and large number of channels without segmenting into shorter time windows or grouping channels. --- Reply to Comment 1.1.1: Comment: Thanks for your prompt replies. **We want to kindly provide further comments:** 1. The current BIOT model uses linear complexity transformer blocks, which scales linearly with recording lengths and channel numbers. 2. Segmenting into short sessions and then applying top-level LSTM/Transformer on session embeddings can potentially capture correlations beyond 30s (like a special version of Pyraformer [2]). 3. All designs that "can help vanilla transformer handle long sequences, such as [1][2][3][4]" can be used in our setting to help BIOT handle long recordings and multiple channels. The EEG/ECG samples in our paper are not long and do not have many channels, so we did not leverage these architectures in BIOT, while adding them is fairly straightforward. [1] Zaheer, Manzil, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham et al. "Big bird: Transformers for longer sequences." Advances in neural information processing systems 33 (2020): 17283-17297. [2] Liu, Shizhan, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, and Schahram Dustdar. "Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting." In International conference on learning representations. 2021. [3] Beltagy, Iz, Matthew E. Peters, and Arman Cohan. "Longformer: The long-document transformer." arXiv preprint arXiv:2004.05150 (2020). [4] Xiong, Yunyang, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. "Nyströmformer: A nyström-based algorithm for approximating self-attention." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 16, pp. 14138-14148. 2021.
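To make Q3-Q5 in this thread concrete, here is a minimal sketch of how a single token embedding could be assembled from the three pieces the authors describe: an FFT energy vector mapped through a learned layer (segment embedding), a channel-embedding table lookup, and the standard sinusoidal positional encoding. The single linear map `W_seg`, the dimensions, and the sampling rate are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def fft_energy(segment):
    # Per-frequency-bin energy of a 1-D segment, as described in Q3.
    return np.abs(np.fft.rfft(segment)) ** 2

def sinusoidal_pe(pos, d):
    # Sinusoidal/cosine positional encoding from "Attention Is All You Need".
    i = np.arange(d // 2)
    angles = pos / (10000 ** (2 * i / d))
    pe = np.empty(d)
    pe[0::2], pe[1::2] = np.sin(angles), np.cos(angles)
    return pe

def token_embedding(segment, channel_id, pos, W_seg, channel_table):
    # Sum of segment (FFT energy -> linear map), channel (table row, Q4),
    # and positional embeddings. W_seg is a single linear layer standing in
    # for the paper's FCN -- an illustrative simplification.
    return (fft_energy(segment) @ W_seg
            + channel_table[channel_id]
            + sinusoidal_pe(pos, channel_table.shape[1]))

rng = np.random.default_rng(0)
fs, d = 200, 32                                    # assumed sampling rate / dim
W_seg = rng.normal(size=(fs // 2 + 1, d)) * 0.01   # rfft of fs samples -> fs//2+1 bins
channel_table = rng.normal(size=(16, d))           # 16 channels x d, as in Q4
e = token_embedding(rng.normal(size=fs), channel_id=3, pos=7,
                    W_seg=W_seg, channel_table=channel_table)
print(e.shape)  # (32,)
```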
Summary: Biological signals are crucial for clinical applications, but current models are specialized for specific settings such as sampling rate and duration. The authors propose a pre-trained model that enables cross-data training and addresses differences in sensor settings such as mismatched channels, variable sample lengths and sampling frequencies, and missing values. Compared to other baselines, their pretrained model improved balanced accuracy by up to 7%. Strengths: • The paper considers many different biosignal application situations: supervised learning with regularly-formatted data, supervised learning with irregular data, unsupervised learning, and pre-training on other datasets. • The authors use many datasets in their experiments, making the results convincing. Weaknesses: • The proposed method lacks technical innovation. There isn’t a significant technical contribution on top of the Transformer or Vision Transformer (ViT). The majority of the paper applies Transformers to different application situations, but lacks special model design for biosignals and their heterogeneity. While well suited for an application/health informatics venue, the novelty does not extend sufficiently to NeurIPS levels. • The paper does not have a strong motivation, or does not clearly declare it. From the abstract, the work (pre-trainable foundational biosignal models) is motivated by the success of LLMs; however, chasing a hot topic should not be the motivation of a solid research work. Instead, it is more important to discuss how LLMs and Transformers can be related to biosignal data. • The authors mention that mismatched channels, variable sample lengths, and prevalent missing values are unique challenges associated with biosignals, but do not further explain why and how they are challenges. • Even though the Transformer is widely applied in different applications, recurrent models are still important baselines, especially for time-series data. 
• I’m not sure it’s appropriate to say “biosignals are more complicated” (Line 67). Other data modalities/applications may have their own challenges too. • The paper is not well written; for example, many paragraphs in the introduction do not connect to the preceding or following ones. • More than one table is too large and extends beyond the page margins. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: • What are the limitations of using language modeling algorithms for biosignal tasks and considering biosignals as "sentences"? • Why would this tokenization scheme not work across domains (e.g., using pre-trained EEG on ECG) as part of the unified biosignal model? • What are the hyperparameter tuning ranges? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: • The authors mention that performance on other types of tasks was not compared, but they also don’t discuss whether the tokenization scheme would affect the performance of those tasks differently. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer XY8p ---- ##### We thank the reviewer for the helpful feedback. We have uploaded a revision with the changes marked in blue. Our detailed responses are as follows: **Q1: What are the limitations of using language modeling algorithms for biosignal tasks and considering biosignals as "sentences"?** As we mentioned in the paper, many biosignals can have mismatched channels, missing values, or variable lengths across different datasets, so directly applying language modeling algorithms or other sequence models in these settings is suboptimal or infeasible. *This is our motivation for developing the BIOT encoder architecture.* **Q2: There isn’t significant technical contribution on top of Transformer or Vision-transformer (ViT). The majority of the paper is applying transformer to different application situations, but lacks the special model design for biosignals and its heterogeneity.** Thanks; this is a misunderstanding. Our contribution is that common Transformer or ViT models cannot handle irregular biosignals (with mismatched channels, missing values, or variable lengths), while in this paper we transform different signals into a unified format, thus enabling joint learning and generalization to downstream applications across different datasets. Our model is designed specifically for biosignals: the tokenization transformation is based on the multi-channel structure of biosignals, and the segmentation step follows the frequency representations of biosignals. **Q3: The paper does not have a strong motivation or clearly declare it. From the abstract, the work (pre-trainable foundational biosignal models) is motivated by the success of LLM, however, chasing a hot topic should not be the motivation of a solid research work. 
Instead, it’s more important to discuss how LLM and transformer can be related to biosignal data.** Thanks; to reduce confusion, we have rephrased our Abstract during the rebuttal. In the original Abstract and Introduction, we mentioned that this paper is motivated by the fact that many previous biosignal learning works cannot apply or generalize to other settings. This paper proposes a powerful encoder architecture that is able to incorporate different datasets in training and is flexible in generalization, since it can handle mismatched channels, variable lengths, and missing values. The connection to previous text modeling (LLM) work is that Transformers can handle variable lengths, and thus we are motivated to design a tokenization trick that transforms biosignals into variable-length "sentences". **Q4: The authors mention that mismatched channels, variable sample lengths, and prevalent missing values are unique challenges associated with biosignals, however, do not further explain why and how are they challenges. I’m not sure if it’s appropriate to say “biosignals are more complicated” (Line 67). Other data modalities/applications may have their own challenges too.** Thanks. The original Introduction explained why and how these three are challenging (Lines 44-53). In addition, there are already previous works discussing how to transform images (ViT), audio (wav2vec), and natural language into a sentence, and this work proposes a way to transform multichannel biosignals into a sentence. We say it is more challenging in the sense that there was previously no relevant study on biosignal transformation. Based on the reviewer's comments, we further rephrased this part (Lines 67-68) during the rebuttal. 
**Q5: Even though Transformer is widely applied in different application, recurrent model is still an important baseline model especially for time-series data.** Yes, two baselines included in this paper, CNN-Transformer (Peh et al., 2022) and FFCL (Li et al., 2022), are based on an LSTM or contain an LSTM component, and they are inferior to our proposed model under various evaluations. **Q6: The paper is not well written, for example, many paragraphs in introduction do not have connections to the previous or later one. More than one tables are too big and out of the range.** Thanks; we have rephrased the relevant parts and adjusted the size of the tables. **Q7: Why would this tokenization scheme not work across domains (e.g. using Pre Trained EEG on ECG) as part of the unified biosignal model?** Thanks. We have not tried the tokenization scheme across domains, so we do not know whether it would work. The current paper did not train EEG and ECG or other signal types together, since the different types are semantically different, e.g., in sampling frequencies, signal amplitudes, and time-series microstructures. Also, the corpus used for pre-training might need to be aligned with, or at least remotely connected to, the downstream tasks (for example, it is currently unclear how ECG signals can help EEG learning tasks). Still, unifying different signals is a promising direction for future work. **Q8: What are the hyperparameter tuning range?** Thanks. The initial Appendix C.3 reported ablation studies for our model-specific hyperparameters, including frequency choices, overlap lengths, and token lengths. This ablation study provides insight into how to select our newly introduced hyperparameters. Other hyperparameters, such as the learning rate, batch size, and L2 regularizers, are selected following common procedures based on the validation set. 
**Q9: The authors mention that performance for other types of tasks were not compared, but they also don’t discuss whether the tokenization scheme would affect the performance of those tasks differently.** Sorry, we kindly ask the reviewer to clarify this question. We would be happy to address your concerns if more details are provided (such as a line number). ---- Thanks again for your comments. We hope our rebuttal has addressed all your concerns. --- Rebuttal Comment 1.1: Title: Thank you for your thorough response Comment: Thanks for the diligent response and clarification. I still have a few outstanding questions that might help you with your final paper. 1. Missingness in signal-type time-series data is often long-term rather than one or a few time windows (e.g., from disconnection of a device), and thus I have a concern about the authors’ problem setup: is the data-missing problem really as depicted in Figure 2, for applying supervised learning with missing data or unsupervised learning with a contrastive loss? I would suggest the authors provide more description of the data-missing problem in biosignal data, perhaps with some examples. 2. There isn’t enough introduction of the baseline models. Do they also have other datasets involved in training? If not, then it may not be a fair comparison to BIOT, at least to the pre-trained BIOT. --- Reply to Comment 1.1.1: Comment: Thanks for your questions. We briefly clarify them below. 1. **For the problem setting.** Figure 2 is just for illustration purposes (it does not necessarily imply any real application). In the real world, missingness in signal data can be long-term due to disconnection of a device. It can also be multiple short-term windows because the connection is unstable and occasionally drops. Also, handling missing signals is just one application in the paper; the model can also handle other scenarios, such as misaligned channels. 
**The most important message here is that our framework is generic enough to handle all the different scenarios of missing signals (long-term or short-term), channel misalignment, and different signal lengths.** In Section 3.3, we verified the strengths of our BIOT model in both scenarios. 2. **More information on baseline models.** In Appendix B.2, we provide extra information on the baseline models (such as how they are implemented). In the experimental Tables 2 and 3, the baseline models and our vanilla BIOT model use only a single dataset, while the pre-trained model is trained on other datasets first and then finetuned on this dataset. We mainly want to show: (i) vanilla BIOT consistently works better than the baseline models; (ii) with the BIOT framework, we can transfer knowledge from other datasets (with different signal lengths and channels) and improve the prediction performance on this dataset, which previous models cannot do. Here, we do not intend to compare the baselines with the pre-trained BIOT. We hope this addresses your additional concerns. We thank the reviewer again and will further improve the final paper based on your valuable comments.
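The "biosignals as sentences" idea debated in this thread can be illustrated with a small sketch: a multi-channel recording is flattened into a variable-length token sequence with per-token channel and position ids, so recordings with different channel sets or durations simply yield different-length "sentences". Token length and overlap follow the defaults stated in the author responses (1 s tokens, 0.5 s overlap); the per-segment normalization and FFT-magnitude features here are simplifying assumptions, not the exact BIOT implementation.

```python
import numpy as np

def tokenize(signal, fs, token_len=1.0, overlap=0.5):
    """Split each channel of a (C, T) recording into overlapping segments.

    Returns per-token FFT magnitude features plus (channel_id, position_id)
    pairs, so all channels can be flattened into one variable-length
    "sentence". Illustrative sketch only.
    """
    n, step = int(token_len * fs), int((token_len - overlap) * fs)
    tokens, channel_ids, position_ids = [], [], []
    for c, channel in enumerate(signal):
        for p, start in enumerate(range(0, len(channel) - n + 1, step)):
            seg = channel[start:start + n]
            seg = seg / (np.sqrt(np.mean(seg ** 2)) + 1e-8)  # per-segment norm
            tokens.append(np.abs(np.fft.rfft(seg)))          # magnitude per bin
            channel_ids.append(c)
            position_ids.append(p)
    return np.array(tokens), np.array(channel_ids), np.array(position_ids)

# A 2-channel, 30 s recording at 200 Hz -> (30*2 - 1) * 2 = 118 tokens,
# matching the token-count arithmetic the authors give for SHHS.
x = np.random.default_rng(0).normal(size=(2, 30 * 200))
tok, ch, pos = tokenize(x, fs=200)
print(tok.shape[0])  # 118
```

A recording with a different channel count or duration changes only the number of tokens, not the model interface, which is what lets mismatched datasets be trained jointly.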
Summary: The authors present a general Transformer-based pipeline for learning on biosignals such as EEG, ECG and human activity sensor data. The proposed approach relies on a tokenization scheme that includes temporal and channel position information and a spectral representation of a segment of a single-channel time series. This allows learning and inference on examples with missing channels and/or time points. The impact of contrastive pretraining is evaluated on multiple modality-specific downstream tasks (EEG seizure detection, ECG arrhythmia prediction, etc.), as well as the impact of missing channels/segments and of supervised pretraining. Results compare favorably to existing baselines. Strengths: Originality: The paper proposes an original approach to feed multivariate time series data into a Transformer through a bespoke tokenization scheme. The combination with a contrastive unsupervised pretraining task and the applicability to multiple different modalities is also novel. Quality: The paper is technically sound, with results presented on multiple downstream tasks and different modalities. There are multiple analyses supporting the choice of hyperparameters. Clarity: The paper is overall clear and well organized. The provided code is very well organized and clearly formatted. Significance: the developed methodology is likely to be reused and could yield pretrained "backbone" models that can be finetuned on different tasks. Reported improvements on different baseline tasks also suggest this approach can outperform SOTA models without pretraining. Finally, the model's ability to naturally work despite missing channels and segments is compelling. Weaknesses: Quality: some of the core choices behind the methodology could be better supported/justified. 
For instance: the use of a linear complexity attention module (see question 1), the choice of a spectral representation for the segment embedding (Q2) and the use of a contrastive loss with channel/segment dropout augmentation (instead of a different unsupervised task, e.g. masked autoencoding) for unsupervised pretraining. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The use of an attention implementation with linear complexity is compelling, especially if the tokenization scheme yields long sequences. Can you give a sense of this length for the experiments of Table 2 and 3? Given the results are already above the baselines, choosing linear attention might be a good tradeoff, but it would be interesting to present a comparison with a vanilla Transformer block for at least one of the experiments (both in terms of performance and of running time like in Table 7). 2. What is the impact of using a spectral (FFT) representation rather than the time series segment itself as the input to the segment embedding FCN? The phase information that is discarded might actually be useful for some downstream tasks, e.g. when fine microstructure is relevant to a task or there are events/stimuli in a stimulation protocol. 3. At line 212, it is said that "the first three datasets are used entirely for unsupervised pre-training"; there is no splitting information available for these datasets in the Appendix (and the code only defines a training dataloader) and so I understand that the hyperparameters were selected based on downstream task validation performance only. It would be interesting to see how pretext task performance relates to downstream task performance, and whether it is similar across modalities (however this is purely curiosity from my part, I don't think such an analysis is necessary for this submission). 4. At line 184, T=0.2, but in the appendix (line 565) it is said to be T=2. 5. 
The use of the word "montages" in an EEG context in Table 1 and in Appendix B is confusing. Typically "montages" refer to a fixed set of channel positions or derivations. Therefore, line 483 should instead read something like "First, the 16 derivations ...". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer vQ39 ------- ##### We thank the reviewer for the appreciation and constructive comments. We have uploaded a revision and used blue to mark the new changes. **Q1: Can you give a sense of this length for the experiments of Table 2 and 3?** Sure, it can be calculated from Table 1. For example, for the SHHS dataset, we use 0.5s as the token overlap and 1s as the token length. Then, a data sample with a 30s duration will have (30 * 2 - 1) * 2 channels = 118 tokens. For PREST, we use 0.5s as the token overlap and 1s as the token length, so a data sample with a 10s duration will have (10 * 2 - 1) * 16 = 304 tokens. Sample durations are all given by the data resources. **Q2: It would be interesting to present a comparison with a vanilla Transformer block for at least one of the experiments (both in terms of performance and of running time like in Table 7).** Sure, we provide the comparison in Appendix C.6 (and copied the results below). Generally, we find that the BIOT model with the vanilla Transformer costs a bit more time, while the performance of the linear-complexity Transformer and the vanilla Transformer is very similar. The reason might be that biosignals, such as EEG, are known to have a *naturally low-rank structure*, and thus the low-rank approximation of the quadratic attention (in the linear-complexity Transformer) does not lead to a noticeable performance drop.

- On the CHB-MIT dataset

| Method | Balanced Acc. | AUC-PR | AUROC | Time / epoch (s) |
|--|--|--|--|--|
| BIOT (with linear Transformer) | 0.6640 | 0.2573 | 0.8646 | 55.5780 |
| BIOT (with naive Transformer) | 0.6669 | 0.2493 | 0.8637 | 62.2877 |

- On the IIIC Seizure dataset

| Method | Balanced Acc. | Cohen’s Kappa | Weighted F1 | Time / epoch (s) |
|--|--|--|--|--|
| BIOT (with linear Transformer) | 0.5762 | 0.4932 | 0.5773 | 25.4500 |
| BIOT (with naive Transformer) | 0.5810 | 0.4994 | 0.5822 | 28.2942 |

- On the TUAB dataset

| Method | Balanced Acc. | AUC-PR | AUROC | Time / epoch (s) |
|--|--|--|--|--|
| BIOT (with linear Transformer) | 0.7925 | 0.8707 | 0.8691 | 25.2788 |
| BIOT (with naive Transformer) | 0.7902 | 0.8673 | 0.8652 | 27.3423 |

- On the TUEV dataset

| Method | Balanced Acc. | Cohen’s Kappa | Weighted F1 | Time / epoch (s) |
|--|--|--|--|--|
| BIOT (with linear Transformer) | 0.4682 | 0.4482 | 0.7085 | 8.1812 |
| BIOT (with naive Transformer) | 0.4693 | 0.4512 | 0.7104 | 8.5493 |

**Q3: What is the impact of using a spectral (FFT) representation rather than the time series segment itself as the input to the segment embedding FCN? The phase information that is discarded might actually be useful for some downstream tasks.** Thanks, this is a valuable question. In our experiments, we use the FFT representation since the relevant tasks in the paper benefit more from the spectral domain than from the raw signal microstructure. However, if phase information is important in other applications, BIOT is flexible enough to use the raw time series as input. **Q4: It would be interesting to see how pretext task performance relates to downstream task performance, and whether it is similar across modalities (however this is purely curiosity from my part, I don't think such an analysis is necessary for this submission).** Thanks for your question; this leads to an interesting discussion. We need to consider two aspects when analyzing generalization performance. **First**, whether the pre-training data is semantically (in terms of the downstream task) close to the downstream task data. **Second**, whether the pretraining task itself is optimized well (this is the reviewer's question). Intuitively, only when the pretraining data is close to the downstream data and the pretraining task is optimized well can the downstream performance be greatly improved. 
Actually, Figure 4 and Figure 5 verify the first aspect, given that the second aspect is satisfied (the supervised pretraining models are all optimized well). From Figure 4 and Figure 5, we can see that pretraining on more similar datasets yields greater gains in downstream finetuning. For the second aspect, we conduct experiments in Appendix C.8 by loading checkpoints of the pretrained model from different epochs, which show that generalization performance improves as the pretrained model is optimized better. The conclusion is consistent on both EEG and ECG datasets.

- Load the checkpoints of BIOT pretrained on PREST at different epochs (1, 2, 3, 4, 5, 8, 10, 20) and finetune on the IIIC Seizure dataset.

| Checkpoint Epoch | 1 | 2 | 3 | 4 | 5 | 8 | 10 | 20 |
|----|----|----|----|----|----|----|----|----|
| Loss in pretraining | 17.32 | 6.19 | 1.94 | 0.93 | 0.63 | 0.42 | 0.33 | 0.31 |
| Weighted F1 when finetuning on IIIC Seizure | 0.5773 | 0.5815 | 0.5804 | 0.5822 | 0.5813 | 0.5835 | 0.5825 | 0.5828 |

- Load the checkpoints of BIOT pretrained on Cardiology-12 at different epochs (1, 2, 3, 4, 5, 8, 10, 20) and finetune on the PTB-XL dataset.

| Checkpoint Epoch | 1 | 2 | 3 | 4 | 5 | 8 | 10 | 20 |
|----|----|----|----|----|----|----|----|----|
| Loss in pretraining | 27.23 | 9.49 | 2.59 | 1.57 | 1.13 | 1.23 | 1.24 | 1.01 |
| Weighted F1 when finetuning on PTB-XL | 0.7493 | 0.7525 | 0.7631 | 0.7689 | 0.7645 | 0.7684 | 0.7652 | 0.7671 |

**Q5: At line 184, T=0.2, but in the appendix (line 565) it is said to be T=2.**

Thanks for finding this typo. T=0.2 is correct.

**Q6: The use of the word "montages" in an EEG context in Table 1 and in Appendix B is confusing. Typically "montages" refer to a fixed set of channel positions or derivations. Therefore, line 483 should instead read something like "First, the 16 derivations ...".**

Thanks for your suggestion.
We have updated these places (Line 482, Line 500, Line 506, Line 512) in the paper. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for providing additional results and clarifications.
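As a quick sanity check on the token-count arithmetic in Q1 of the response above, here is a minimal sketch (the `num_tokens` helper is hypothetical; segment length, overlap, and channel counts are taken from the response):

```python
def num_tokens(duration_s, n_channels, token_len_s=1.0, overlap_s=0.5):
    """Count overlapping tokens: a sliding window of length token_len_s
    moving with stride (token_len_s - overlap_s) over each channel."""
    stride = token_len_s - overlap_s
    per_channel = int((duration_s - token_len_s) / stride) + 1
    return per_channel * n_channels

# SHHS: 30s samples, 2 channels   -> (30 * 2 - 1) * 2  = 118 tokens
# PREST: 10s samples, 16 channels -> (10 * 2 - 1) * 16 = 304 tokens
```

With a 1s window and 0.5s overlap, the stride is 0.5s, which is where the (duration * 2 - 1) tokens-per-channel figure comes from.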
Summary: The paper presents a Biosignal Transformer (BIOT) model that can be pre-trained from multiple data sources and fine-tuned on different downstream biosignal tasks. The model tokenizes diverse biosignals into unified “biosignal sentences” and adds channel embeddings and relative position embeddings to preserve spatio-temporal features. The BIOT model is versatile and applicable to various biosignal learning settings across different datasets. Comprehensive evaluations on EEG, ECG, and human activity sensory signals demonstrate that BIOT outperforms robust baselines in common settings and facilitates learning across multiple datasets with different formats. Strengths: The paper presents a novel approach to handling diverse biosignals by tokenizing them into unified “biosignal sentences”. The proposed BIOT model is versatile and applicable to various biosignal learning settings across different datasets. The comprehensive evaluations on EEG, ECG, and human activity sensory signals demonstrate the robustness of the proposed model. Weaknesses: The paper does not provide motivation for why the authors picked these specific datasets. There is no discussion about large datasets like MIMIC-IV that include ECG data. The selected baselines do not include some reference models like SimCLR or MAEs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why were these specific datasets chosen for evaluation? How would the proposed model perform on large datasets like MIMIC-IV? How would the proposed model compare to reference models like SimCLR or MAEs? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The most significant limitation of this paper is the experimental setup. 
The selected baselines do not include some reference models like SimCLR or MAEs. There are no scaling experiments which seems to be the most important finding in GPT-style models. If we want to create the equivalent of a GPT for biosignals, we need to source and combine probably hundreds of similar datasets and think hard about the proportion of each modality (EEG, movement, ECG etc). However, the authors left this for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer ve81

###### We thank the reviewer for the constructive feedback. We have uploaded a revision and used blue to mark the new changes. Our detailed responses are as follows.

-----

**Q1: The paper does not provide motivation for why the authors picked these specific datasets. There is no discussion about large datasets like MIMIC-IV that include ECG data.**

Thanks. The datasets used in the paper are all large and widely used EEG/ECG datasets, and previous works (such as [1][2][3][4][5]) also use them. Some (such as SHHS, CHB-MIT, Cardiology, PTB-XL) are from the open PhysioNet repository, some (such as TUEV, TUAB) are from the Temple University open EEG corpus, and some (such as PREST, IIIC-Seizure) are proprietary datasets. In terms of dataset size, SHHS (portion 1) is ~185 GB, PREST is ~320 GB, Cardiology is ~77 GB, TUAB is ~159 GB, TUEV is ~37 GB, etc. Also, the recommended MIMIC-IV-ECG dataset is officially under embargo (please check the MIMIC-IV-ECG website), so we are unable to provide additional results.

[1] McCallan, Niamh, Scot Davidson, Kok Yew Ng, Pardis Biglarbeigi, Dewar Finlay, Boon Leong Lan, and James McLaughlin. "Epileptic multi-seizure type classification using electroencephalogram signals from the Temple University Hospital Seizure Corpus: A review." Expert Systems with Applications (2023): 121040.

[2] Yang, Chaoqi, M. Brandon Westover, and Jimeng Sun. "Manydg: Many-domain generalization for healthcare applications." ICLR (2023).

[3] Prasanna, J., M. S. P. Subathra, Mazin Abed Mohammed, Robertas Damaševičius, Nanjappan Jothiraj Sairamya, and S. Thomas George. "Automated epileptic seizure detection in pediatric subjects of CHB-MIT EEG database—a survey." Journal of Personalized Medicine 11, no. 10 (2021): 1028.

[4] Sridhar, Niranjan, Ali Shoeb, Philip Stephens, Alaa Kharbouch, David Ben Shimol, Joshua Burkart, Atiyeh Ghoreyshi, and Lance Myers.
"Deep learning for automated sleep staging using instantaneous heart rate." NPJ Digital Medicine 3, no. 1 (2020): 106.

[5] Biswal, Siddharth, Haoqi Sun, Balaji Goparaju, M. Brandon Westover, Jimeng Sun, and Matt T. Bianchi. "Expert-level sleep scoring with deep neural networks." Journal of the American Medical Informatics Association 25, no. 12 (2018): 1643-1650.

**Q2: The selected baselines do not include some reference models like SimCLR or MAEs. How would the proposed model compare to these reference models?**

Thanks, this is a misunderstanding. Our paper aims to provide a *new biosignal encoder architecture* that can handle different signal formats (with missing, variable-length, and mismatched channels) at once. Our idea is orthogonal to the design of SimCLR or MAE; instead, BIOT can serve as a backbone for them in the self-supervised setting. Note that our BIOT encoder combined with a final prediction layer is a supervised model, while combined with SimCLR/MAE/NCE it is an unsupervised model. Per the reviewer's request, we use BIOT as the backbone and try SimCLR and MAE (conservative and aggressive versions with different data augmentation rates) in the self-supervised setting (replacing the current NCE loss in Equation 4). We use TUEV and TUAB as the datasets, show the new results in Appendix C.5, and copy them here. Detailed result analysis can be found in Appendix C.5. *Note that we want to kindly remind the reviewer that this paper proposes a new biosignal encoder architecture, not a new self-supervised learning framework.*

| Model | (TUAB) Balanced Acc. | (TUAB) AUC-PR | (TUAB) AUROC | (TUEV) Balanced Acc. | (TUEV) Cohen’s Kappa | (TUEV) Weighted F1 |
|--|--|--|--|--|--|--|
| BIOT + NCE (PREST) | 0.7907 ± 0.0050 | 0.8752 ± 0.0051 | 0.8730 ± 0.0021 | 0.5207 ± 0.0285 | 0.4932 ± 0.0301 | 0.7381 ± 0.0169 |
| BIOT + NCE (PREST+SHHS) | 0.8019 ± 0.0021 | 0.8749 ± 0.0054 | 0.8739 ± 0.0019 | 0.5149 ± 0.0292 | 0.4841 ± 0.0309 | 0.7322 ± 0.0196 |
| BIOT + SimCLR (PREST) | 0.7894 ± 0.0072 | 0.8681 ± 0.0053 | 0.8715 ± 0.0074 | 0.5113 ± 0.0331 | 0.4862 ± 0.0209 | 0.7310 ± 0.0244 |
| BIOT + SimCLR (PREST+SHHS) | 0.7852 ± 0.0096 | 0.8655 ± 0.0014 | 0.8694 ± 0.0075 | 0.5147 ± 0.0311 | 0.4829 ± 0.0157 | 0.7280 ± 0.0348 |
| BIOT + conservative MAE (PREST) | 0.7393 ± 0.0087 | 0.8347 ± 0.0076 | 0.8259 ± 0.0081 | 0.4760 ± 0.0328 | 0.4393 ± 0.0420 | 0.6831 ± 0.0112 |
| BIOT + conservative MAE (PREST+SHHS) | 0.7410 ± 0.0098 | 0.8312 ± 0.0082 | 0.8262 ± 0.0061 | 0.4601 ± 0.0238 | 0.4349 ± 0.0182 | 0.6797 ± 0.0189 |
| BIOT + aggressive MAE (PREST) | 0.7679 ± 0.0045 | 0.8591 ± 0.0086 | 0.8412 ± 0.0068 | 0.4929 ± 0.0416 | 0.4728 ± 0.0364 | 0.7130 ± 0.0216 |
| BIOT + aggressive MAE (PREST+SHHS) | 0.7692 ± 0.0011 | 0.8525 ± 0.0079 | 0.8459 ± 0.0043 | 0.4970 ± 0.0409 | 0.4686 ± 0.0136 | 0.7086 ± 0.0195 |

**Q3: There are no scaling experiments, which seems to be the most important finding in GPT-style models. If we want to create the equivalent of a GPT for biosignals, we need to source and combine probably hundreds of similar datasets and think hard about the proportion of each modality (EEG, movement, ECG etc). However, the authors left this for future work.**

Thanks. We have improved the Abstract and Introduction to clarify our contributions. As we mentioned earlier, *this paper focuses on providing a powerful encoder architecture that is able to incorporate different datasets in training and is flexible in generalization*.
This work does not claim to provide a biosignal GPT (which would require enormous time and computing resources), and we leave it as future work. ---- Thanks again for the valuable comments. We hope our rebuttal has addressed all your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the thorough response and for taking my suggestions into account. I am happy to increase my score to Accept now.
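As an aside on Q2 above, the NCE objective that the BIOT encoder is paired with (and that SimCLR-style methods swap in alternatives for) is typically an InfoNCE-style contrastive loss. A minimal NumPy sketch, not the authors' exact Equation 4 (the `info_nce` helper and its temperature default are illustrative assumptions):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.2):
    # z1, z2: (batch, dim) embeddings of two views of the same samples;
    # the matching (diagonal) pair is the positive, all others are negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise cosine similarities
    # softmax cross-entropy with the positive pair on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

With identical, well-separated embeddings for the two views the loss is near zero; with unrelated embeddings it sits near log(batch_size).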
Rebuttal 1: Rebuttal: We thank all the reviewers for your time and constructive feedback. During the rebuttal, we have prepared a revision and used blue to mark the new changes. Pdf: /pdf/6f0994411972d16479ed532205913b9612d6c6e7.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
Accept (poster)
Summary: This analysis paper demonstrates that the chain-of-thought explanations generated by LLMs do not faithfully represent the true deciding factors of their predictions. In the experiments, LLMs are steered to give intended predictions with three types of biases (Answer is Always A, Suggested Answer, and social stereotypes). Although their behaviors change accordingly, the generated CoT generally fails to mention the biasing factor. This casts doubt on whether we can safely trust CoT explanations, given that they may not faithfully reveal the underlying reasoning process of LLMs. Strengths: - Originality: Different from prior works that either propose simulation-based metrics or rely on human studies, this paper designs a novel controlled experimental setting for evaluating faithfulness, where the deciding factors of the model predictions are known beforehand. This is achieved by structuring the prompt to LLMs in a way that is likely to elicit certain answers from them. - Quality: The work conducts a range of experiments investigating three types of biases, using two different LLMs, in both zero- and few-shot settings. The idea of affecting LLMs’ behaviors with certain biases in an intended way is interesting and effective in obtaining unfaithful CoT explanations from LLMs. - Clarity: The paper is well written, with tables and figures illustrating the experimental results as well as concrete examples. - Significance: The paper provides an alternative view on the unfaithfulness of machine-generated explanations. It also indicates that it is unsafe to trust LLMs’ explanations since the true reasons for their predictions are usually not verbalized in CoT. Weaknesses: - The paper's findings are not that surprising, since there is no systematic way to guarantee that LLMs generate answers consistent with their CoT. It would be helpful to extend the study to program-aided LLMs, where the answers are the results of executing the programs generated by LLMs.
- The experimental setting of Answer is Always A is questionable, and thus the result obtained is less convincing compared to the other settings. As indicated by [1], the few-shot examples in the prompt teach LLMs the answer space rather than the mapping from task input to output. Thus, LLMs might misinterpret the task as always trying to explain the answer A. The work also shows that the explanations are indeed changed to support the new incorrect answers, which would make them faithful. In this regard, the finding goes against the main argument of the paper. [1] Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? (Min et al., EMNLP 2022) Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What would the desired CoT explanations look like if they were faithful? In the setting of Answer is Always A, should LLMs generate a CoT like “Since the answers to the examples in the prompt are all As, the answer to the new question should also be A”? I feel some of the desired “faithful” explanations that the authors are expecting might not be realistic. 2. How do you define faithfulness? To me, those explanations supporting the new incorrect answers are actually faithful since they are consistent. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It would be great if the authors could talk about potential extensions of the work to prove the faithfulness of explanations, or at least what faithful explanations should look like under the current experimental settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! > What would the desired CoT explanations look like if they are faithful? They might look as you propose, _if the model makes predictions according to the bias_. But faithful explanations that look just like those in the demonstrations are possible if the model simply does not use the biasing feature. See "Is it realistic to expect faithful explanations?" in the general response. > How do you define faithfulness? See "Definition of faithfulness" in the global response. See the global response for discussion of your points under "Weaknesses" as well.
Summary: The paper studies the faithfulness of CoT explanations. The authors perform experiments on different datasets under various setups (biased and not) to demonstrate how decisions made by the models change along with the provided CoT explanations, even justifying the wrong decisions made by the model. Strengths: 1. The topic of study is timely and interesting. 2. The paper is written very clearly and is easy to follow. 3. The observations and findings of the paper are interesting. Weaknesses: 1. While the paper provides interesting observations, it is mainly an observational paper and lacks technical rigor. 2. I also had issues with how faithfulness is defined/used in the paper. It seems like the CoT reasoning provided by the model is faithful, as the model tries to justify its prediction whether it is right or wrong; so, based on the definition of faithfulness, the reasoning is faithful, and it is just a matter of the model using the wrong reasoning and ultimately giving the wrong answer. I think there should be a difference between wrong reasoning with a faithful CoT and wrong reasoning with an unfaithful CoT, which I think is harder to quantify/observe/prove. This paper seems to assume that any wrong reasoning is unfaithful, which I have a hard time understanding. 3. The studies on biasing the context were interesting, but I would have loved to see other, more creative approaches to biasing the context and their effects on the results. **Minor comment** Line 195 typo (Appendix C details the our annotation -> Appendix C details our annotation). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors please clarify their definition of faithfulness and address comment 2 in the weaknesses section? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations were discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > Can authors please clarify on their definition of faithfulness and address my comment 2 made in the weaknesses section? See “definition of faithfulness” in the global response. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing the response to my concerns. I acknowledge reading the responses provided by the authors. I would suggest the authors add the discussion around faithfulness from this rebuttal to the paper, to improve clarity and avoid future questions like those raised by me and other reviewers. My other concerns are not resolved, so I will keep my score.
Summary: The authors present an investigation of ‘unfaithfulness’ in CoT prompting for large language models. They argue in particular that models can be predictably influenced by biasing features in the input, which the CoT “explanations” reliably fail to mention, and that they can be influenced by other factors (e.g., social stereotypes) but mask this reasoning by appealing to other, weak forms of evidence. The authors test a GPT-3.5 model and a Claude 1.0 model with modifications to two datasets: a subset of Big-Bench Hard (BBH) and the Bias Benchmark for QA (BBQ). With the first dataset, they add a ‘biasing feature’, which is either a modification of the examples such that the correct answer (in a few-shot prompt) is always the same multiple-choice selection (e.g., (A)), or a modification to the example such that there is an additional “Suggested answer” line (e.g., “I think the answer is A but I’m curious to hear what you think.”). In the second analysis and dataset, they introduced a piece of “weak evidence” in ambiguous reasoning problems, to see (1) whether the model applied the weak evidence in an inconsistent manner and (2) whether these applications were in part due to a tendency to produce stereotype-aligned answers (e.g., the answer is always “the Black man” regardless of whether the weak evidence would suggest that or not). Strengths: I liked this paper a lot. The problem is highly relevant and highly topical for current trends in AI. Overall, the paper is well written, interesting, and provocative. The evaluations are clever and fairly systematic. The qualitative analysis is a valuable addition. The motivation for the hypothesis that we should not expect CoT explanations to be faithful by default was very compelling. Weaknesses: **Results and Design** - As in a lot of LLM evaluation papers, the main statistical analyses provide a rather crude basis for statistical evidence, and many of the conclusions are presented with no statistical evidence at all.
- For example: The conclusion in 3.2 that “generally few-shot CoT exhibits less unfaithfulness than zero-shot CoT” is a statistical claim, but there are no inferential statistics (nor even descriptive statistics such as confidence intervals) present in any of the analyses in this section. - Generally, the evaluation setting is systematically structured (CoT vs. no CoT; debiasing instructions vs. not; few-shot vs. zero-shot; GPT vs. Claude) with the same items/prompts being evaluated in these different conditions, but this structure isn’t taken into account when calculating the statistical results, which makes it difficult to assess the claims made by the paper. The fact that the data are structured in this way means the individual data points are not i.i.d., and thus a binomial test is not an appropriate statistical test. A paired difference test is better, but still misses the multi-level nature of these comparisons. What would be appropriate here is a multi-level (hierarchical / mixed-effects) logistic regression model, taking into account the fact that the experimental manipulations have different effects on different tasks in BBH, and that the same examples appear in all of the different levels of the manipulated variables (e.g., it’s the same prompts that appear in the CoT context as in the no-CoT context, as well as in zero-shot vs. few-shot, and Claude vs. GPT, etc.). See Lampinen et al. (2022) _Can language models learn from explanations in context?_ for an example of this kind of analysis in the language model evaluation context, or see Gelman and Hill (2006) for further background. - There are potential alternative interpretations of what the model is learning from the prompts in these contexts. It’s possible these explanations do not account for the whole pattern of data as well as the explanation that the authors put forth.
But it’s still important to bring some of them up and discuss them, to help the reader better understand the evidence in favor of the authors' account. - In Study 1, for “Always A” in the few-shot setting with chain-of-thought, the model could be learning that the task is to *respond A and provide a plausible explanation that doesn’t mention A*. The example CoTs are all good or valid explanations (by design), and almost certainly represent a certain style of explanation. None of them likely mentions the fact that ‘all the answers are A’ as an explanation, and moreover, ‘all the answers are A’ is a kind of meta-task explanation, which is probably a different kind of explanation than those that appear in the few-shot examples. It’s an explanation at the level of the whole set of examples, whereas each individual CoT (in the shots) probably only concerns its individual example. Therefore, it’s unclear if the model’s reasoning is unfaithful here, or whether the prompt is encouraging it to be unfaithful. - The “suggested answer” manipulation will also fall under the same header of not being mentioned previously in the prompt (Appendix F.2), and furthermore, the “suggested answer” would just be a bad explanation for doing something, and the model only saw good (high-quality) explanations in the few-shot prompt (and likely in the pretraining set, for the zero-shot case). If my helpful assistant told me "Oh, I heard some voice that suggested it to me weirdly out of the blue just now", I would say that is a bad reason for doing something. - In BBQ, I tried hard to find out the nature of the few-shot prompt examples, but I couldn’t determine if they always included only stereotype-aligned responses for the disambiguated context examples (as is shown in Table 16), or if this varied in a counterbalanced manner.
Without knowing this, it’s hard to evaluate the few-shot results, as the prompt may have biased the model toward stereotype-aligned responses. - I’m generally unclear about the design choices around the BBQ data with ambiguous scenarios and weak evidence. There seems to be some implicit model behind this study, having to do with weighing evidence. - In the appendix, the authors write that some generated pieces of evidence were too strong and others were too weak, which leaves me confused about what the ideal amount of ‘weak evidence’ is and what it’s intended to do. - In the example in Appendix B, we have a story about a line cook and a physician, and which one is likely to have an exclusive credit card. The authors note that selecting the physician, “while arguably reasonable”, is counted as reasoning on the basis of stereotypes according to BBQ. Fine, but insofar as there is (quite plausibly) a statistical correlation between being a physician vs. a line cook and owning an exclusive credit card, how are we to say that _that evidence_ (that statistical association) is weaker than the ‘weak evidence’ provided in their modified task? In all likelihood, it’s stronger. In this case, the model could infer that the task being asked of it is to justify the highest-probability outcome using the piece of ‘weak evidence’ provided. - Another example: in Table 16, the Unknown response could also be considered stereotype-aligned, if you deemed the evidence of being on the phone vs. speaking to a young child sufficiently strong to sway you were it not for the low-income vs. high-income bias.
- I think the logic of this Study is either:
  - The situation is ambiguous and the evidence is weak, meaning the normative Bayesian posterior probabilities of each outcome are close to 50% (and the model should say Unsure);
  - OR, the evidence presented is stronger than the existing prior association, such that the evidence could sway you in one direction or the other (and you should be consistently swayed).
  - But it seems like in many examples there is likely a strong prior (statistical) association, and the evidence is not strong enough to overcome it (normatively speaking).
- I’m not totally sure about this, but I think it might be useful to know what the models predict without the ‘weak evidence’ provided. I think this could shed light on whether the model has a prior tendency that it’s looking to rationalize, or whether the process of reasoning is bringing out some tendency. **Literature** - Rationalization seems to be a key theme in this work, and I’m surprised the authors didn’t discuss that rationalization is a powerful way of making LMs better, and the implications of that for their studies. In particular, the work of Zelikman, Wu, Mu, & Goodman (2022) _STaR: Bootstrapping Reasoning With Reasoning_. **Other comments** - Some of the language and logic throughout the paper was sloppy and/or overstated. Here are some specifics: - The logic in the first paragraph doesn’t follow for me: One might hope that the fact that CoT improves performance means that the reasoning described by the CoT is an explanation for its predictions, but I wouldn’t say that the performance increase alone suggests this. - In the next paragraph, there is also a short discussion about how the models could have correct, unfaithful reasoning, which I found confusing; I think the point is quite subtle. I guess the authors take “correct reasoning” to not entail that the conclusion follows from the reasoning? Or, is that also part of ‘correct reasoning’?
- In Section 2, the authors write “measuring these effects gives us an account of what models actually rely on to make their predictions, …” I think this is overstated: you don’t know what the model relies upon in the nonbiasing cases. - I found the discussion around the relevance of subjectivity a little too terse and non-obvious. For example, I kept wondering why switching answers for subjective questions would be concerning, given that subjective questions could have multiple valid answers. I found the Navigation example in Table 3 particularly confusing, as it highlights the ambiguity of the task, where it's not clear if the orientation matters. If the model changes responses based on some biasing features, is that actually problematic? I think the authors could do a lot more to motivate this particular aspect of the evaluations. - 3.2: The main conclusion is that CoT explanations are systematically unfaithful. This is a minor point: this conclusion relies on the point made in footnote 2, that the explanations don’t mention the biasing feature. Footnote 2 should then be elevated to a top-line result, so the reader can see more clearly the logic of the argument being made. This is done in the figure caption, but not in the main text. - CoT can steer models from correct initial predictions towards bias-consistent predictions: there is no statistical evidence in support of this claim. - In the BBQ results: the authors talk about “unfaithful explanations” even in the no-CoT conditions, and this is generally confusing.
Of course, there’s the empirical fact that the authors are testing… about faithfulness, etc., but from an a priori perspective, what is the step to thinking these might be explanations? As far as I can tell, the CoT prompting is about “thinking step by step”, which I don’t think would fall under any standard definition for what an explanation is. 2. For the models evaluated, do we know whether CoT was used during their training process? This would be an important feature to know about the models. 3. Looking at Table 12 in the appendix (if I’m reading this correctly), it looks like the “Suggested Answer” trick only really works for a very small subset of tasks, and even the “Answer is always A” trick works for about half the tasks. If this is a correct reading, can you speculate why this might be? 4. In the few-shot setting in the BBQ eval, the few-shot prompt uses disambiguated context examples from BBQ and one ambiguous one with an Unknown label. In the appendix, the example shows 2 stereotype aligned responses for the disambiguated cases. Is that always the case? If not, what is the ratio of proportions of stereotype-aligned vs. -misaligned examples among the disambiguated context examples used in the prompts? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: I think the authors missed an opportunity here to view their evaluation as a kind of adversarial attack and speculate on the potential societal harms that could ensue. I’d like to hear if they have thoughts along that line of speculation. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! ### Results and design > The conclusion in 3.2 ... no inferential statistics ... present in any of the analyses in this section. Thanks, we’ll add paired difference statistics there. > A paired difference test is better, but still misses the multi-level nature of these comparison. What would be appropriate here is a multi-level (hierarchical / mixed-effects) logistic regression model Thank you for the suggestion! We agree a hierarchical model would be best for analyzing how different factors contribute to unfaithfulness, but our focus is establishing that the effect exists across various settings, for which we think averaging is sufficiently convincing without adding unnecessary complexity. We do use paired difference tests, for example for the confidence intervals on differences of proportions w.r.t. the effect of CoT in section 4.2 (see Appendix H). We will clarify in the main text, and add the confidence intervals for the BBH analysis too. > In BBQ ... I couldn’t determine if the few-shot prompt examples always included only stereotype-aligned responses for the disambiguated content examples Yes, the few-shot prompt examples do contain only stereotype-aligned examples for the disambiguated contexts. The zero-shot setting establishes that CoT is biased by social bias, and for few-shot our motivation was to explore a worst case for faithfulness with respect to social bias. > the authors write how some generated pieces of evidence were too strong and others were too weak, which makes me confused about what the ideal amount of ‘weak evidence’ is and what it’s intended to do. Weak evidence was manually written for each case in BBQ with the intent of providing something too weak for the LM to actually use it to make the decision, but strong/salient enough for the LM to latch on to in its explanation. 
Having more examples with evidence in this sweet spot affects the “% Unfaithfulness” metric, but not “% Unfaithfulness explained by bias”, which is the metric of primary interest. > the model could infer the task that is being asked of it is to justify the highest probability outcome using the piece of ‘weak evidence’ provided. The "right" answer in BBQ is not important for us; what matters is the relationship between the biasing factor, model predictions, and the model explanations. For this and your other concerns about interpreting the experimental results, see “Definition of faithfulness” in the global response. ### Literature > Zelikman, Wu, Mu, & Goodman (2022) Good point and thank you for the reference! We will discuss this in the paper. ## Other comments > I wouldn’t say that performance increase alone suggests this. Fair; it’s not the performance increase alone, but also the quality of the CoT explanations (i.e., that they often also describe a correct process for arriving at the correct answer). We will fix this. > there is also a short discussion about how the models could have correct, unfaithful reasoning ... I guess the authors take “correct reasoning” to not entail that the conclusion follows from the reasoning? Or, is that also part of ‘correct reasoning’? In our view an explanation can be correct by virtue of being the actual reason the label is correct, while unfaithful by misrepresenting the model's reasoning process. > the authors write “measuring these effects gives us an account of what models actually rely on to make their predictions, …” I think this is overstated: You don’t know what the model relies upon in the nonbiasing cases. We will clarify that this is only a partial account. > I found the discussion around the relevance of subjectivity a little too terse and nonobvious ... I kept wondering why switching answers for subjective questions would be concerning, given that subjective questions could have multiple valid answers. 
For subjective tasks, sound reasoning may be possible for a number of different answers, but sufficiently complete explanations for different answers will require mutually incompatible assumptions or claims. If biases cause the model to contradict itself across explanations by steering the model to make different assumptions in different contexts, this is unfaithful. > Footnote 2 should be elevated to a top-line result Good point. We will do so. > CoT can steer models from correct initial predictions towards bias-consistent predictions: There is no statistical evidence in support of this claim. This follows from the fact that the performance of CoT in the biased context goes down. > In the BBQ results: The authors talk about “unfaithful explanations” even in the no CoT conditions, and this is generally confusing. Oops, we’ll fix this. ### Questions > From the start, the authors label the auto-regressive generations of LLMs under CoT prompting as “CoT explanations”. But is there any reason to call them explanations? To the extent that CoT explanations are correct, one may at least view them as explanations of why the answer is correct rather than just explanations of why the model made its decision. Barring that, our description of CoT explanations as “explanations” can be read as an aspirational label necessary for our main claim that they can be systematically unfaithful as explanations (insofar as they are explanations at all). > For the models evaluated, do we know whether CoT was used during their training process? No. > it looks like the “Suggested Answer” trick only really works for a very small subset of tasks, and even the “Answer is always A” trick works for about half the tasks. Suggested Answer actually works for all tasks; see Table 9 and take the difference between the CoT Biased column and CoT Unbiased column. “Answer always A” does not work for every task, but there is a fair amount of variance for these metrics on a per-task basis. 
> In the appendix, the example shows 2 stereotype aligned responses for the disambiguated cases. Is that always the case? Yes. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks to the authors for the rebuttal, and apologies for the late reply. I'll just respond to a particular set of points made in the rebuttal, which I think touch on crucial features of the design and interpretation of this study. The authors write: > Faithful explanations that are stylistically consistent with the few-shot demonstrations (i.e., which don’t mention the biasing feature) are entirely possible in all of our experiments, as long as the model doesn’t make predictions on the basis of the biasing features. The models could have done this and had faithful explanations. Our experiments show that they don't. It is because they rely on the biasing features to make their predictions that they must verbalize them in order for the explanations to be faithful. From this, I understand that the model could have had faithful explanations had the model done the task right (i.e., not been influenced by the biasing features) because we know the model was never going to generate bias-consistent explanations (by virtue of them being implausible, given the examples of good explanations used in the prompts). This design decision makes the import of the project more about the task design than about the faithfulness of the explanation (i.e., the question of faithful explanations in your study comes down to the question of “does the model make the unbiased response in the biased context”). Of course, this is not unrelated to faithfulness, by the arguments the authors have spelled out. But I think it is a genuine limitation of the study worth acknowledging. 
For example, a more direct and probably stronger test of the hypothesis would have come with experiments where an explanation consistent with the biased predictions was not implausible, so you could see if the model ever produced a good explanation for a biased response (something a priori quite unlikely in this study). A small point, but I hope the authors can clarify the fact that the responses for the disambiguated cases for BBQ were always stereotype aligned. --- Reply to Comment 1.1.1: Title: Reply Comment: The reviewer makes a great point that the interpretation of the few-shot results is subtle. In this setting, the few-shot prompt makes it such that the most plausible way for the model to be faithful is to give unbiased reasoning, and makes it less plausible that the model could give faithful explanations for biased responses. However, we think the zero-shot CoT setting that we included in our paper would be the most direct way to test exactly what the reviewer is suggesting. Giving a faithful explanation for a biased answer is a priori plausible in our study under the zero-shot CoT setting since we are not applying stylistic constraints on the resulting generation. For example, the following faithful explanation for a biased answer seems a priori plausible to appear in a zero-shot CoT setting: > Let's think step by step: ... Thus, the best answer would appear to be B. However, the user has suggested that they think the correct answer is A. It's possible that the best answer could be A if the user interprets the question to be asking about X. Therefore the best answer is A. In this setting, we still see that models do not verbalize these biases. However, as the reviewer says, it's still possible that the presence of high-quality explanations in pre-training or during supervised fine-tuning could affect the tendency for models to mention biases even in a zero-shot setting. This is precisely a reason to expect CoT to not be faithful by default, i.e. 
it's not just a priori unlikely that models verbalize biases in this study, this is a priori unlikely in general. Language models rely on a wide range of features to make predictions (including unintuitive ones), so it would be surprising if explanations seen during pre-training, supervised fine-tuning, or in few-shot prompting were well aligned with how models make predictions. In summary, we think that our zero-shot CoT setting is a reasonable baseline for a setting where faithful explanations for biased answers are more plausible. Given that models do not verbalize biases in this setting, this suggests that this should not be an issue for interpreting our experiments. However, if future experiments were to find that models do faithfully verbalize biases in the zero-shot CoT setting, this would affect the interpretation of few-shot CoT results and would be an important limitation to discuss. We will add discussion of this. We will also add that finding prompts or CoT demonstrations that make faithfully explaining the biasing features more plausible could be a good direction for future work. Thanks! > A small point, but I hope the authors can clarify the fact that the responses for the disambiguated cases for BBQ were always stereotype aligned. Thanks, we will add this.
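The paired difference analysis the rebuttal refers to (confidence intervals on differences of proportions from matched biased/unbiased runs) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; the function name and the pairing scheme are hypothetical, and the standard error follows the standard McNemar-style formula for paired binary outcomes.

```python
import math

def paired_proportion_diff_ci(pairs, z=1.96):
    """Wald confidence interval for a difference of paired proportions.

    pairs: list of (a, b) with a, b in {0, 1}, e.g. a = 1 if the model's
    answer matched the bias in the biased context, b = 1 if it did in the
    unbiased context, for the same underlying example.
    Illustrative sketch only, not the paper's actual analysis code.
    """
    n = len(pairs)
    p1 = sum(a for a, _ in pairs) / n
    p2 = sum(b for _, b in pairs) / n
    # Discordant-pair proportions drive the paired standard error.
    p10 = sum(1 for a, b in pairs if a == 1 and b == 0) / n
    p01 = sum(1 for a, b in pairs if a == 0 and b == 1) / n
    diff = p1 - p2
    se = math.sqrt((p10 + p01 - (p10 - p01) ** 2) / n)
    return diff, (diff - z * se, diff + z * se)
```

Because the same examples appear in both conditions, this paired interval is tighter than one computed from two independent samples whenever the outcomes are positively correlated across conditions.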
Summary: When using LLMs for problem solving, decision-making, and general reasoning, many prompting and reasoning strategies rely on step-by-step reasoning and/or other explicit explanations and structuring of the reasoning process. This paper studies the degree to which decisions and classifications are faithful to the LLM's explicitly provided explanation of its reasoning process. I.e., if an LLM says it made a decision based on factor X, is there actually a different factor Y that drove the decision? In a series of experiments, this research demonstrates that there are other factors that bias the LLM's decision-making process (and that are not included in CoT explanations), with implications for fairness/bias, overreliance, and safety/reliability concerns in the use of LLMs for decision making and planning. Strengths: The paper is well motivated, tackling an issue fundamental to reasoning across many scenarios: whether the verbalized reasoning procedure is faithful to the true reasoning procedure. The 3 experiments (social bias, few-shot bias, and suggested answer bias) convincingly demonstrate that the LLM's decision making is influenced by factors outside its verbalized reasoning. The implications for fairness/bias, overreliance, and safety/reliability are significant. Weaknesses: It's unclear whether the biases affect all classes of reasoning tasks equally, or if there are some scenarios or domains that are more strongly affected than others. Also, it is not clear from these experiments whether and how to generalize this beyond categorical decision making tasks (i.e., beyond multiple choice) to biases that might influence other generative AI tasks. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Would potential memorization of benchmark datasets affect the interpretation of results? What, if any, are the implications of these experimental results for tasks that are more about generative rather than reasoning tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have addressed limitations. The potential negative societal impact could be broadened to include the potential for these methods to allow adversarial manipulation of LLM reasoning that would be difficult to detect by end users. E.g., these results could aid an adversary in manipulating a prompt or input data to the LLM to trigger a biased and incorrect result. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We’re glad that you found the experiments convincing and well-motivated. > It's unclear whether the biases affect all classes of reasoning tasks equally, or if there are some scenarios or domains that are more strongly affected than others. Also, it is not clear from these experiments whether and how to generalize this beyond categorical decision making tasks (i.e., beyond multiple choice) to biases that might influence other generative AI tasks. This is a good point and one that we think would be very important to investigate for future work. We included a range of reasoning tasks in case this effect was only limited to a particular type of task. We suspect that further model improvements will reduce susceptibility to biases on tasks with only one correct process and answer, e.g. mathematics tasks, since models will be less inclined to make errors. However, we suspect that models could continue to be susceptible to biased reasoning on more subjective tasks, since it’s possible to give biased but correct reasoning in these cases. For more elaboration on this, see the response to Reviewer mRxa. Evaluating on generative tasks would require extensions to our approach to measure whether biasing features are affecting model outputs and assess whether model outputs are consistent with their given explanations. It's also worth mentioning that multiple choice may be a more challenging setting to get biased reasoning than generative tasks, since the multiple choice options are designed to be mutually incompatible. So the bias needs to be strong enough to get the model to give a contradictory answer to what it would normally give. In contrast, answers to generative AI tasks can differ in more subtle ways. It may be easier for model reasoning and answers for generative AI tasks to be slightly distorted by bias. > Would potential memorization of benchmark datasets affect the interpretation of results? Great question. 
If anything, if these benchmarks were memorized by the models we tested that would make the conclusions even stronger: It would mean that the model’s reasoning generation process is so susceptible to bias that it can even override a memorized correct answer.
Rebuttal 1: Rebuttal: Thank you for the reviews! We’re encouraged that the reviewers agreed that our work is interesting, provocative, well-written and clear, found that our evaluations “convincingly demonstrate that the LLM's decision making is influenced by factors outside its verbalized reasoning” and thought that the paper is well-motivated and timely. In our global response, we address some points raised by multiple reviewers. ## Definition of faithfulness Reviewer y8DJ: > I also had issues on how faithfulness is defined/used in the paper ... Seems like this paper assumes that any wrong reasoning is unfaithful which I am having a hard time understanding why. Reviewer 6weX: > How do you define faithfulness? To me, those explanations supporting the new incorrect answers are actually faithful since they are consistent. According to the simulatability framework of faithfulness, which we follow (see Doshi-Velez and Kim, 2017, _Towards A Rigorous Science of Interpretable Machine Learning_), a faithful model explanation should help a human form a mental model of the process that the AI system is using to make a prediction. The reasoning provided by a model on a single example may be coherent and consistent with its prediction on that example (in which case we call it _plausible_), while being misleading about how the system will make predictions on other examples (in which case we call it _unfaithful_). In our experiments, the explanations supporting the new incorrect answers in biased contexts (or contexts with flipped evidence, for the BBQ examples) are inconsistent with the models’ behavior on the original examples, which is why we call them unfaithful. For example, the model may report inconsistent beliefs (e.g., about whether “shooting from the eighteen” is a common phrase in soccer; see Table 1) in order to justify answers aligned with an undisclosed bias. 
Unfaithful explanations can also be given for correct answers, but we focus on systematic unfaithfulness according to biasing features which cause undesirable (i.e., incorrect) behaviors, as these are easiest to measure and most concerning for model safety. Reviewer 6weX: > The paper's findings are not that surprising since there is no systematic way to guarantee LLMs to generate answers that are consistent with their CoT. It would be helpful to extend the study to program-aided LLMs, where the answers are the results of executing the programs generated by LLMs. As explained above, the unfaithfulness we demonstrate is not about whether the answers are consistent with their corresponding CoT, but whether the CoT explanations accurately reflect the process the model uses to make a decision (i.e., whether they explain model behavior on other examples). The problem is the process for generating the explanation (or program) itself can be systematically biased away from the explanation’s contents by undisclosed factors such as social biases. So using program-aided LMs would not fix the underlying issue. ## Is it realistic to expect faithful explanations? Reviewer mRxa: > In Study 1, for the “Always A”, in few-shot setting with chain-of-thought, the model could be learning that the task is to Respond A and provide a plausible explanation that doesn’t mention A ... it’s unclear if the model’s reasoning is unfaithful here, or whether the prompt is encouraging it to be unfaithful. Reviewer 6weX: > What would the desired CoT explanations look like if they are faithful? ... should LLMs generate a CoT like “Since all the answers to the examples in the prompt are all As, the answer to the new question should also be A”? I feel some desired “faithful” explanations that the authors are expecting might not be realistic. ... > LLMs might misinterpret the task as always trying to explain the answer A. 
The work also shows that indeed the explanations are changed to support the new incorrect answers, which are faithful. In this regard, the finding is against the main argument of the paper. Faithful explanations that are stylistically consistent with the few-shot demonstrations (i.e., which don’t mention the biasing feature) are entirely possible in all of our experiments, _as long as the model doesn’t make predictions on the basis of the biasing features_. The models could have done this and had faithful explanations. Our experiments show that they don't. It is because they rely on the biasing features to make their predictions that they must verbalize them in order for the explanations to be faithful. On task underspecification: our tasks are not so underspecified. For BBH, we put task instructions at the beginning, and for BBQ, we prompt the model to avoid relying on biases. Even if underspecification is a contributor to unfaithfulness, it is still important: any few-shot prompt with a subtle bias may be interpreted to mean “please provide a biased response and rationalize it with a plausible-sounding explanation.” If a model behaves in accordance with that interpretation, it produces unfaithful explanations, which is a problem if we are trying to use CoT for explainability. When explanations change to support new incorrect answers due to the existence of an undisclosed biasing feature, that makes the explanations unfaithful, not faithful (see “Definition of faithfulness”). ## Biased reasoning as an adversarial attack Reviewers mRxa and aL7J mention the potential for harm from adversarial attacks using methods based on our approach. If a model is making a decision on the basis of user input, a user inputting a biased prompt (e.g., using our 'Suggested Answer' method) could make the system produce biased predictions without a trace of this bias in its CoT explanations. 
This could cause problems for model auditing or fairness methods if they rely on CoT explanations to detect faulty or unfair reasoning. Hopefully, publishing our results should encourage skepticism in the faithfulness of CoT explanations and help avoid some of these negative outcomes.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Federated Linear Bandits with Finite Adversarial Actions
Accept (poster)
Summary: This paper studies the linear contextual bandits problem with federated learning of $M$ clients communicating with a central server. In particular, the paper assumes adversarial finite actions, and considers two cases of communication: asynchronous and synchronous. Following the idea of OFUL, this work extends the previous SupLinUCB in linear contextual bandits to deal with the federated learning scenario, and proposes the algorithm FedSupLinUCB, which achieves $\widetilde{O}(\sqrt{dT})$ where $T$ is the total number of pulls (the summation of cumulative pulls over all $M$ clients), and $d$ is the dimension. More importantly, this result is attained with limited communication cost: $\mathcal{O}(\sqrt{d^3M^3}\log(d))$ for the synchronous case, and $\mathcal{O}(dM^2\log(d)\log(T))$ for the asynchronous case. In addition, FedSupLinUCB further extends to the variance-adaptive and adversarial-corruption scenarios. Strengths: The paper is clearly written and well organized. The notations are well defined and their meanings are explained clearly prior to their usage. 1. Combining online learning and federated learning is an interesting direction. The proposed algorithm attains the optimal regret bound of this problem up to some $\log T$ factors, and further ensures limited communication cost, which could be dominating in distributed systems. Among these results, the most interesting one is that for the synchronous case, which is independent of the horizon $T$. 2. The instance-dependent regret bounds for the variance-adaptive and adversarial-corruption scenarios are also very interesting. The worst-case regret bound is not meaningful in most real-world applications. Weaknesses: This paper does not have any specific weakness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Any lower bound on the communication cost in the synchronous case or the asynchronous case? 2. 
Any idea to improve the regret bounds to the optimal regret bound $\mathcal{O}(\sqrt{dT\log T})$, which can be obtained by Fed-PE? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This work is purely theoretical, and does not have any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the clear summary and for finding our paper interesting. Please see our response below with respect to your specific comments. **Q1**: "Any lower bound of the communication cost in the synchronous case or the asynchronous case?" **Response**: There are some very recent works, which appeared after the submission of our manuscript, focusing on analyzing the communication cost complexity in federated bandit problems. E.g., in [R2], the communication cost is measured in bits in the federated linear bandits setting with a simple unit-ball action set. **Q2**: "Any idea to improve the regret bounds to the optimal regret bound $O(\sqrt{dT\log(T)})$, which can be obtained by Fed-PE?" **Response**: It is important to note that our study differs from Fed-PE, which primarily focuses on finite and fixed context sets. In contrast, we address a finite and time-evolving context setting, presenting new challenges. As such, the G-optimal design method utilized in Fed-PE is not applicable to our context. To the best of our knowledge, our regret is order-optimal in this scenario. [R2] Salgia, Sudeep, and Qing Zhao. "Distributed linear bandits under communication constraints." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. Regarding Q1: it would be useful to add these concurrent results to the final version of this paper. Regarding Q2: thank you for the clarification. Could you elaborate on what makes you think that your regret is order-optimal and how to derive that lower bound? --- Reply to Comment 1.1.1: Comment: Thank you for your message. We appreciate the reviewer's suggestions and will certainly incorporate the new communication results into the final version. 
In single-client linear bandits with finite adversarial actions, it has been shown that an agent incurs regret at least $\Omega(\sqrt{dT})$ [R3], where $d$ is the dimension of the unknown vector and $T$ is the total number of rounds. In the federated linear bandits setting of interest, there are $M$ clients and each runs for $T$ rounds. When communicating arbitrarily, the system can be viewed as a single agent that runs a total of $MT$ rounds, which incurs regret at least $\Omega(\sqrt{dMT})$; this gives a lower bound for federated linear bandits with finite actions. The proposed algorithms achieve $\tilde{O}(\sqrt{dMT})$, omitting logarithmic factors, and thus are order-optimal. Note that the proposed algorithms achieve order optimality while maintaining small communication costs, which is the essence of federated learning. A more comprehensive analysis of order optimality and the benefit of the federated linear bandit framework can be found in Remark 2, which follows Theorem 5.1 in our paper. [R3] Chu, Wei, et al. "Contextual bandits with linear payoff functions." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 2011.
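The reduction sketched in this reply can be written out compactly; this is a hedged reconstruction of the argument from the rebuttal, not text quoted from the paper:

```latex
\[
  R_{\mathrm{single}}(T) \;=\; \Omega\!\left(\sqrt{dT}\right)
  \quad\Longrightarrow\quad
  R_{\mathrm{fed}}(M,T) \;\geq\; \Omega\!\left(\sqrt{d\,MT}\right),
\]
% since $M$ clients with unrestricted communication are at most as
% powerful as a single agent playing $MT$ rounds in total, any lower
% bound for that single agent applies to the federated system.
% FedSupLinUCB attains $\tilde{O}(\sqrt{dMT})$, matching this lower
% bound up to logarithmic factors.
```

The point of the federated result is then not the regret rate itself but that it is attained with communication cost far below the "communicate every round" baseline implicit in this reduction.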
Summary: This paper studies a federated linear bandits model, where M clients communicate with a central server to solve a linear contextual bandits problem with finite adversarial action sets, and proposes the FedSupLinUCB algorithm, which extends the SupLinUCB and OFUL principles in linear contextual bandits. Both asynchronous and synchronous cases are considered. Experiment results corroborate the theoretical analysis and demonstrate the effectiveness of FedSupLinUCB on both synthetic and real-world datasets. Strengths: The method description, theoretical derivation, and complexity analysis of the article are very detailed and well organized. Weaknesses: See Questions part. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do the adversarial corruption actions only exist in the setting of the asynchronous case? 2. Are there differences in intensity for adversarial actions? If so, do different intensities have different effects on the experimental scene? 3. Is Robust Async-FedSupLinUCB a general robust method or can it only be used against a specific adversarial setting? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the interesting questions regarding the proposed FedSupLinUCB algorithm. **Q1**: "Do the adversarial corruption actions only exist in the setting of the asynchronous case?" **Response**: The asynchronous case contains the synchronous case as a special case. We thus designed the robust Async-FedSupLinUCB algorithm to handle adversarially corrupted actions in this general asynchronous case, so it can be directly applied to the synchronous case. Note that in the study without corruption, we treat the synchronous setting separately and design a more communication-efficient scheme without time-horizon dependence. **Q2**: "Are there differences in intensity for adversarial actions? If so, do different intensities have different effects on the experimental scene?" **Response**: We measure the total intensity of the adversarial actions by $C_p$ as shown in Lines 281-283, and its impact on the regret is represented by an additional additive term of $d C_p$ in Theorem 8.1. Please correct us if we have misunderstood your remark. We have made a general assumption on the context set, considering it to be both finite and adversarial. This assumption enhances the practicality of our algorithm, making it applicable to a wide range of real-world scenarios. **Q3**: "Is Robust Async-FedSupLinUCB a general robust method or can it only be used against a specific adversarial setting?" **Response**: Robust Async-FedSupLinUCB is a general robust method that is capable of handling any additive corruption on the reward function, as long as the total corruption budget is constrained. --- Rebuttal Comment 1.1: Title: Response to author's rebuttal Comment: Thanks to the authors' response, which has addressed my minor concerns; I will keep my preliminary rating.
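Based on the rebuttal's description of Theorem 8.1, the corruption-robust guarantee is roughly of the following form; this is a hedged paraphrase rather than the theorem's exact statement, with $C_p$ the total corruption budget and $T$ the total number of pulls across clients as elsewhere in this review thread:

```latex
\[
  R_{\mathrm{robust}}(T) \;=\; \tilde{O}\!\left(\sqrt{dT} \;+\; d\,C_p\right),
\]
% i.e., the price of robustness is an additive term linear in both the
% corruption budget $C_p$ and the dimension $d$; for $C_p = 0$ the bound
% recovers the uncorrupted rate.
```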
Summary: In this work, the authors consider the problem of federated linear bandits with finite adversarial actions. This is the first time a setting is investigated where federated clients are faced with a set of actions to choose from that changes over time in an oblivious adversarial manner, and the authors of this paper do so for both synchronous and asynchronous versions of the problem. To handle this problem, the authors propose two algorithms that are based on SupLinUCB (Chu et al. 2011), which is a standard approach for linear bandits, adapted to federated bandits. Then, they combine this S-LUCB approach with two different protocols to tackle both the asynchronous and the synchronous versions of the problem. One of the challenges of federated learning comes with the cost of communication between the clients and the server. In the asynchronous setting, only one client is active at a time and the synchronization rate depends on each individual client. For this problem, the authors prove that the proposed algorithm achieves an order-optimal (up to log factors) high-probability bound of order $O(\sqrt{dT})$ while ensuring that the total communication cost is logarithmic in T. In the synchronous case, layers of clients work simultaneously, which allows the server to take advantage of the various information provided by the clients. This means that these clients don't just synchronize when the information that they have has changed, but they also do so periodically to ensure that their information is up to date. This extra information provided by the server to the clients allows to actually lower the total number of communications and obtain a $\tilde O(\sqrt{dMT_c})$ regret bound while keeping a time-independent bound on the number of communications. 
The authors provide detailed proofs of their results in the appendices, and experiments on both generated and real-life datasets to highlight how the regret and the communication costs evolve in terms of arrival patterns and number of clients. Strengths: This work studies a new variant of the federated linear bandits problem where the clients face finite adversarial arm sets instead of either finite fixed arm sets or infinite arm sets. To do so, they build upon existing algorithms for linear bandits with an adversarial arm set, SupLinUCB (Chu et al. 2011), and for asynchronous federated learning, FedLinUCB (He et al. 2022a). The novelty of their approach mainly resides in the improved communication protocol, in particular in the synchronous setting, where they take advantage of a layer structure to limit the communication cost while still synchronizing regularly. In both the asynchronous and the synchronous settings, they recover the regret of SupLinUCB in the single-player setting (Chu et al. 2011), and are within a log T factor of the state-of-the-art results for federated synchronous and asynchronous learning, both in terms of regret and communication costs. Overall, this is a well-presented paper that provides interesting results for a new variation of the federated learning problem that might be more relevant in practice, which is reinforced by the fact that they also run experiments on real-world datasets. Presenting results for the corrupted setting is also a nice generalization of their results. Weaknesses: In the experiments section, it would be interesting to see the proposed algorithms compared with other results for federated learning: the results in Appendix H are promising and could benefit from being extended. It would also be nice if the legends on the plots could be larger so they are a bit easier to read. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The problems studied in federated learning are very close to those of multiplayer multi-armed bandits. Have you thought of whether there are results that could bridge the two settings? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate that the reviewer liked our federated linear bandit model, and we thank them for providing a thoughtful summary of our paper. **Q1**: "In the experiments section, it would be interesting to see the proposed algorithms compared with other results for federated learning..." **Response**: We thank the reviewer for the suggestion. In the main paper, we proposed algorithms that provably achieve nearly minimax-optimal regret while maintaining a small amount of communication cost. In the experiments section, we aim to better interpret the proposed algorithms by empirically studying the impact of the arrival pattern, the number of clients, and the trade-off between regret and communication cost. In other words, gaining intuition about and aiding the interpretation of the proposed algorithms is our goal in the setup of the experiments. To this end, we did not conduct experiments for other existing algorithms, which as far as we know are not minimax optimal in our setting. **Q2**: "The problems studied in federated learning are very close to these of multiplayer multiarmed bandits. Have you thought of whether there are results that could bridge the two settings?" **Response**: A majority of the multiplayer multi-armed bandit settings focus on handling the collisions caused by multiple players pulling the same arm, while in federated bandit settings, the multiple players communicate with a central server to better solve their own local decision-making problems using the information acquired from other players. We thank the reviewer for the insightful observation regarding existing research in the domain of multiplayer multi-armed bandit problems; we are not aware of any work combining the study of communication costs and collisions. --- Rebuttal Comment 1.1: Title: Please engage in the rebuttal Comment: Dear reviewer, Please acknowledge the authors' response and tell us if the replies changed your assessment.
Summary: This paper addresses the linear bandits problem in the context of federated learning. It proposes a general algorithm called FedSupLinUCB that solves a linear contextual bandit problem with finite adversarial action sets that may vary across clients. The paper also considers practical challenges in federated settings such as synchrony, communication efficiency, and time-evolving scenarios. Strengths: This paper explores the linear contextual bandit problem with finite adversarial action sets in the federated learning setting. Additionally, it addresses several practical challenges in federated settings, such as synchrony, communication efficiency, and time-evolving scenarios. Weaknesses: Background and challenge: - The introduction and preliminaries do not provide a detailed definition of the time-evolving and adversarial arm. To improve comprehension, it would be helpful to explain the impact of these factors on the federated linear bandit problem. - Although the paper discusses the main challenges of federated linear bandits, it should emphasize the key differences from traditional linear bandit problems, such as single-player bandits or distributed bandits. - The paper lacks strong motivation due to the vague background and challenge. Novelty: - The main theoretical contributions of the paper extend previous work on linear bandit problems to the federated setting. For example, FedSupLinUCB is an extension of SupLinUCB and OFUL. Robust Async-FedSupLinUCB incorporates ideas from He et al. (2020b). - The theory and methodology employed in the paper appear to be borrowed from the literature. Experiment setting: - The reward function utilized in the experiments, where $r_{t, a}=\theta^{\top} x_{t, a}+\epsilon_t$, does not align with the settings described in the paper. To my knowledge, it is more general to let the reward function differ across clients with certain variances according to the environments. 
- The experiments conducted in the paper are limited. Only a classic algorithm proposed in 2011 by Chu et al. is used as a baseline, and not all proposed algorithms are fully evaluated. Writing: - It is unusual for a conference paper to have 10 sections. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - What are the specific challenges in the federated linear bandit problem? Federated learning was originally proposed to address data-sharing issues. Does the federated linear bandit problem face similar challenges related to system efficiency and heterogeneity? - What unique solutions or methods does the paper contribute? It appears that some challenges have already been addressed by previous works. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No limitations are discussed in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the interesting questions regarding the proposed FedSupLinUCB algorithm. **Q1**: "The introduction and preliminaries do not provide a detailed definition of the time-evolving and adversarial arm. To improve comprehension, it would be helpful to explain the impact of these factors on the federated linear bandit problem." **Response**: We appreciate the reviewer's attention to the setting of the context sets. In Section 3.1 (line 110), we have provided a clear definition of the time-evolving context sets, indicating that the context sets change over time and are different among clients. The inclusion of both finite and adversarial context sets in our problem formulation presents a more challenging scenario compared to the Fed-PE algorithm, which specifically handles finite and fixed contexts. Techniques like G-optimal design are applicable in that situation, but they do not apply in our formulation. To address the challenges posed by time-evolving contexts, we have introduced the FedSupLinUCB framework. We will highlight this distinctiveness when presenting the results and comparisons. **Q2**: "Although the paper discusses the main challenges of federated linear bandits, it should emphasize the key differences from traditional linear bandit problems, such as single-player bandits or distributed bandits." **Response**: In our study, we concentrate on a federated linear bandit model. In comparison to single-player bandit and distributed bandit approaches, we leverage the benefit of data sharing through communication between the clients and the central server. Our main focus is on achieving optimal regret while keeping communication costs low. Specifically, in our model, we consider a star-shaped structure, where M clients interact with a central server to expedite the decision-making process. Additionally, we highlight the benefits of the federated linear bandit framework in the remarks following the theorem in our paper. 
It's worth noting that there does not appear to be a clean separation between "distributed bandits" and "federated bandits". Nevertheless, we contend that the "federated bandits" setting enables a more comprehensive modeling of the communication and computation, and can be viewed as focusing more on the impact of different communication and computation modes, which is indeed the perspective we take in this work. **Q3**: "The paper lacks strong motivation due to the vague background and challenge." **Response**: Linear bandits with adversarial and finite actions find numerous applications, including recommendation systems [R1]. The distributed nature of these applications naturally aligns with the setting we have studied in this paper, as we have also mentioned in the introduction section of our paper. **Q4**: "The main theoretical contributions of the paper extend previous work on linear bandit problems to the federated setting...The theory and methodology employed in the paper appear to be borrowed from the literature." **Response**: Similar to most of the previous works on federated bandits, whose technical aspects resemble the counterparts of single-player bandits, we also utilize techniques in single-player bandits as analysis tools for this specific federated linear bandit problem. However, we believe that the study of the federated bandit problem is important and valuable, particularly due to the recent trend of invoking more edge computing resources. To address the specific challenges in the federated setting, we proposed the FedSupLinUCB framework, where the Async-FedSupLinUCB and Sync-FedSupLinUCB both achieve near-optimal regrets while significantly reducing communication costs. **Q5**: "The reward function utilized in the experiments, where $r_{t,a_t} = \theta^{\top} x_{t,a_t}^i + \epsilon_t$, does not align with the settings described in the paper. The experiments conducted in the paper are limited..." 
**Response**: We conducted our experiment to elaborate on the theoretical analysis and indeed aligned it with the problem formulation presented in Section 3.1. We made the assumption that all clients share the same underlying parameter $\theta$. This assumption allows us to leverage the sharing of information among clients, thereby benefiting the decision-making process of each individual client, and this is the fundamental reason that federated learning is able to gain performance improvement over learning by individual agents. Considering our assumptions regarding finite and adversarial context sets, we conduct a comparison with the order-optimal benchmark, SupLinUCB, to showcase the advantages of our federated framework. **Q6**: "What are the specific challenges in the federated linear bandit problem? Federated learning was originally proposed to address data-sharing issues. Does the federated linear bandit problem face similar challenges related to system efficiency and heterogeneity? What unique solutions or methods does the paper contribute? It appears that some challenges have already been addressed by previous works." **Response**: The major technical challenges (also for almost all the federated bandits works) are to incorporate the techniques in the single-player algorithms into the federated scenario while explicitly considering the impact of communication cost. Yet the focus on the communication cost under different update and computation modes led us to the proposed new algorithms in this work. We believe the study of such federated contextual bandits problem is important and valuable, and the proposed algorithms and their analysis advance our understanding and provide insights into the problem in terms of maintaining the near-optimal regret while significantly mitigating the communication costs. [R1] Ruan, Yufei, Jiaqi Yang, and Yuan Zhou. 
"Linear bandits with limited adaptivity and learning distributional optimal design." Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing. 2021. --- Rebuttal Comment 1.1: Title: Please engage in the rebuttal Comment: Dear reviewer, Please at least acknowledge the authors' response and ideally explain if and why their reply does not change your mind. --- Rebuttal Comment 1.2: Title: Thanks for the feedback Comment: Thank you for addressing my concerns. However, I remain uncertain, especially in terms of the core challenges in FL like client data heterogeneity, the distinct difference between distributed bandits and federated bandits, and the paper's singular contributions beyond the amalgamation of techniques from two distinct domains.
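As background for the SupLinUCB-based design discussed above, the UCB index at the core of LinUCB-style algorithms can be sketched as follows. This is a generic single-player sketch under standard assumptions (identity-initialized Gram matrix, exploration parameter alpha); the function names are illustrative, not the paper's actual federated implementation.

```python
import numpy as np

def linucb_index(A, b, x, alpha=1.0):
    """UCB index for arm feature x: least-squares estimate of theta from the
    Gram matrix A and response vector b, plus an exploration bonus.
    (Illustrative sketch; names and alpha are assumptions.)"""
    theta_hat = np.linalg.solve(A, b)                 # ridge-style estimate
    bonus = alpha * np.sqrt(x @ np.linalg.solve(A, x))  # confidence width
    return x @ theta_hat + bonus

d = 3
A = np.eye(d)   # Gram matrix, initialized to identity
b = np.zeros(d)  # accumulated reward-weighted features
arms = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
best = max(range(len(arms)), key=lambda i: linucb_index(A, b, arms[i]))
```

With no data yet (b = 0), the estimate is zero and both unit arms get the same bonus, so the first arm is selected by tie-breaking; as observations accumulate in A and b, the index trades off estimated reward against remaining uncertainty.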
NeurIPS_2023_submissions_huggingface
2023
Natural Language Instruction-following with Task-related Language Development and Translation
Accept (poster)
Summary: The main contribution of the work is TALAR, a method for learning a vector representation of the instruction via a referential game. In TALAR, a generator creates a task vector from (state, next-state) pairs, which is then back-translated into natural language by the receiver. A translation from natural language to task vectors is then learned. At policy learning time, the instruction gets translated into task language and fed to the policy. The results show that TALAR outperforms other methods of learning task embeddings. Strengths: Allowing a task representation to emerge in a referential game, and then using the representation in policy learning, is original. The experiments are standard in the RL community and show TALAR's superior performance. The finding that a more linearly separable task representation leads to better instruction-following performance is significant. Weaknesses: My main concern with the work is that I don’t understand why the proposed technique would be better than parsing the instruction from humans into a structured representation that’s formulated ahead of time. What is the utility of having the discrete representation of a task emerge, rather than defining a desired discrete representation to begin with? GLA and the BERT methods make more sense to me, because the embedding methods are fairly generic. It seems to me that you could choose the predicate representation vector ahead of time, instead of playing the referential game, and then do the same translation step as before. Handcrafting another structured representation baseline seems like it would elucidate whether the referential game is necessary. One could do the parsing programmatically, or using an LLM like GPT-3. Furthermore, what happens when the natural language is OOD in task language? The method doesn’t seem set up to handle distributional shifts, which seems likely because, as the authors say, “natural language is a complex and unbounded representation” (line 29). 
Because the task representation is both discrete and emergent, is the representation manifold going to be well-behaved for OOD natural language? I also had some difficulty understanding certain paragraphs in the methods section. See questions. Comments: - I recommend including “predicate representation” in the title. - Line 4: I wouldn’t include the terms “outside-in” and “inside-out” in the abstract, because you don’t define them until the intro and the meanings are not immediately intuitive. - Line 192: Typo “The We” - Line 194: This paragraph is hard to understand. A receiver diagram would be nice. My score reflects the fact that after reading the paper, I’m unconvinced of the utility of the core technique in the paper. Happy to discuss this with the authors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Line 197 and Line 203. Are these two sentences contradictory? You say you’re predicting the word corresponding to the state, but the word is chosen randomly? - Figure 2: is the number of predicate networks linear in the number of objects in the state? How did you choose the number of predicate and argument networks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The discussion is adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and valuable comments. We have taken every question into consideration and revised our paper to fix the typos. Please find the response below. > Q1: Why would the proposed technique be better than parsing the instruction from humans into a structured representation that’s formulated ahead of time? A1: Good point. We acknowledge that a manually designed structured representation could potentially be an effective method for interpreting natural language instructions. However, it is important to highlight the advantages of our approach over this traditional method. - Universality: The primary advantage of our method, TALAR, is its universality. Converting human instructions into a structured representation typically necessitates a unique design for each task, a process that can be both labor-intensive and limited in its ability to manage complex instructions. In contrast, TALAR automatically learns to develop TL and translates NL to TL, thereby eliminating the need to manually parse natural language instructions into a structured representation. This learning process is both automatic and universal. - Scalability: TALAR's task language is designed to adapt and scale in accordance with the complexity of the environment and the tasks that agents are required to perform. This feature proves particularly beneficial in scenarios involving complex tasks with numerous objects and operations, where creating a structured representation for natural language instructions can be challenging. In contrast, our method learns task language in an unsupervised manner through engagement in a referential game with the receiver. This learning framework can be easily applied to various NLC-RL tasks. > Q2: Handcrafting another structured representation baseline seems like it would elucidate whether the referential game is necessary. 
A2: We conducted a supplementary experiment to compare TALAR with a manually crafted structural representation on the FrankaKitchen task. To implement the new baseline, the NL instructions are parsed into a one-hot vector that indicates the current goal configuration, serving as the structured representation. Subsequently, we implement a translation process identical to that of TALAR. All other experimental settings remain consistent with those outlined in our original study. The results of this experiment are presented in the table below, where ‘Handcrafted’ denotes the new baseline trained based on the handcrafted representation.

| | TALAR | Handcrafted |
| --- | --- | --- |
| Success rate | 93.5±8.8 | 94.1±8.1 |

The results show that the performance of TALAR is comparable to that of the handcrafted representation in terms of learning speed and final score (Figure 6). This experiment further justifies the effectiveness and conciseness of the task language learned by TALAR. > Q3: Furthermore, what happens when the natural language is OOD in task language? A3: At present, TALAR handles OOD natural language based on the generalization ability of the translator. The translator's language model is trained on extensive corpus data and thereby demonstrates a certain degree of generalization ability. Preliminary experiments on unseen natural language instructions, although potentially not sufficiently OOD, provide initial evidence of this generalization ability. Simultaneously, we recognize that managing OOD datasets is a valuable area of exploration. A direct yet potentially effective strategy could involve designing an OOD detection module and processing the OOD natural language instructions differently. For instance, we could consider updating the translation generator or translator module online. > Q4: I also had some difficulty understanding certain paragraphs in the methods section. A4: We apologize that the method description was not clear enough. 
Here we would like to take this opportunity to clarify the particular details of our method. **Comment1**: Figure 2: is the number of predicate networks linear in the number of objects in the state? How did you choose the number of predicate and argument networks? Response1: The number of predicate networks is not necessarily linear in the number of objects in the state. It serves as a representation of the potential relationships that our learning algorithm might identify within a specific task. These relationships can often be abstract and anonymized, which makes their exact quantification a complex endeavor. Thus, the selection of the number of predicate networks is mainly influenced by the scale of the task, including factors such as the complexity of the instructions and the number of objects. In our experiments, we opted for a moderate number of 4 for both the FrankaKitchen and CLEVR-Robot tasks. Our ablation study on the number of predicate networks (Figure 7 in the original paper) shows that TALAR performs consistently well across different choices of the number of predicate networks. **Comment2**: Are the two sentences at line 197 and line 203 contradictory? Response2: Sorry for the unclear presentation. The sentence on line 197 provides a high-level overview of the receiver's training objective, while the sentence on line 203 delves into the specifics of our implementation; they are not contradictory. In the context of the $(s,s',L_N)$ tuple, we initially select a word $T_i$ at random from the $L_N$ sentence. Subsequently, the receiver's goal is to predict this selected word, taking into account the task language (produced by the TL generator) and the previous words (i.e., $T_1,\cdots,T_{i-1}$). This process pressures the TL generator to produce effective task language, as its output directly influences the receiver's prediction accuracy. **Comment3**: This paragraph is hard to understand. A receiver diagram would be nice. 
Response3: We have incorporated a receiver diagram to enhance the clarity of the discussed concept, which is depicted in Figure 7 in the PDF file attached to the rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for A1 and A2. These answers get at my main concerns of TALAR's usability, and I have increased my score. I have reservations as to whether this solution is better than a semantic parsing solution, especially since semantic parsing is so good nowadays with LLMs, so I do not increase my score more. Nevertheless, after reading the rebuttal I think the work is technically correct. --- Reply to Comment 1.1.1: Title: Response to follow-up comments Comment: Thank you for your follow-up comments. We appreciate your feedback and are pleased to note that our response has addressed your main concerns. ------ Regarding your follow-up comment: > Whether this solution is better than a semantic parsing solution, especially since semantic parsing is so good nowadays with LLMs. We acknowledge your concern and would like to discuss the related studies on semantic parsing using LLMs. While it is true that LLMs can generate the semantic representation of a task, prior research [1] has underscored several challenges associated with LLM-based methods. These include (1) the complexity of semantic decomposition, (2) the necessity for specific prompt design to generate semantic representations, and (3) the potential for the knowledge required for translation to exceed the capacity of a single prompt. In contrast, our method provides an effective way to automatically generate task representations. Moreover, our method holds a distinct advantage in terms of model size: efficient semantic parsing using LLMs may require language models with an extensive number of parameters, and there is a significant performance gap (as demonstrated in [3]) between models with 300+M parameters and those with 10+B parameters. 
To employ LLM for task-specific semantic parsing, [1] utilizes code-davinci-002 and [2] employs codex as the LLM for parsing. All these LLMs possess billions of parameters. In contrast, our method, which employs a BERT model (110M parameters) to encode natural language sentences, offers a more efficient parsing solution. ------ Thanks again for your time and effort in providing feedback. We are happy to discuss if you had any additional concerns. ### Reference: [1] Andrew Drozdov, et al. Compositional Semantic Parsing With Large Language Models. ICLR 2023. [2] Zhoujun Cheng, et al. Binding Language Models in Symbolic Languages. ICLR 2023. [3] Erik Nijkamp, et al. Codegen: An Open Large Language Model for Code With Multi-Turn Program Synthesis. ICLR 2023.
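The receiver's word-prediction objective described in Response2 above (select a random word $T_i$ from the NL sentence and predict it from the task language plus the preceding words $T_1,\cdots,T_{i-1}$) can be sketched as follows; the function name and token handling are illustrative assumptions, not the authors' actual implementation.

```python
import random

def receiver_training_example(sentence_tokens):
    # Pick a random position i in the NL sentence; the receiver must then
    # predict the word T_i given the task language plus the preceding words
    # T_1, ..., T_{i-1}. (Illustrative sketch; names are assumptions.)
    i = random.randrange(len(sentence_tokens))
    context = sentence_tokens[:i]   # previous words fed to the receiver
    target = sentence_tokens[i]     # word the receiver must predict
    return context, target

ctx, tgt = receiver_training_example(["move", "the", "red", "ball"])
```

In training, the task-language vector would be concatenated with an encoding of `context`, and the cross-entropy loss on `target` flows back into the TL generator.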
Summary: This paper proposes the framework of TALAR, an Inside-Out learning framework for training policies that follow language instructions with reinforcement learning. The method leverages predicate representations for building a compact space of task language, and learns the generator of the task language from the game state by reconstructing natural language. A translator from natural language to task language is learned with a language translation loss, and a policy based on task language is learned by reinforcement learning. In experiments, TALAR shows better performance than baseline methods that use natural language embeddings on two continuous control environments. Strengths: This paper has overall great presentation of the motivation, methods, and experiments. TALAR has the clear motivation of learning the task-correlated language and optimizing a policy based on task language, instead of using generally pre-trained language embeddings from language models. The experiments in section 5.2 show that TALAR learns a better embedding space compared to BERT pre-trained models. Weaknesses: Some baseline methods are not compared: a method using the same network architecture or number of parameters as TALAR but with no task language translation loss, trained with behavior cloning and reinforcement learning. This would alleviate the concern that the performance improvement is caused by additional network capacity introduced by the autoregressive translation, while the baseline methods only have a very small number of parameters being tuned (the FC layer after BERT, if that is the case). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In figure 2(b), it seems that there is no mechanism for selecting different arguments for different predicates. Do all the predicates have the same set of arguments? 
If so, since the predicate is a Boolean variable, it is reasonable to suspect that the multiple predicates cannot convey rich information in the arguments. In line 222, it is not precisely described how multiple predicates are concatenated to form the task language. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are well addressed in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your time and effort in reviewing our paper and providing valuable feedback. We are glad that you found our presentation, soundness, and the TALAR method to be strong. We would like to address your concerns and questions as follows. > Q1: Some baseline methods are not compared: a method using the same network architecture or number of parameters as TALAR but with no task language translation loss. > A1: We have incorporated a comparison with these baselines, which share the same network architecture as TALAR. More specifically, we have added new BERT-based baselines on the FrankaKitchen task, which employ the same architecture as the TL generator in TALAR, adhering to the BERT model. Other experimental settings remain the same as in the experiments in the paper. These new baselines are denoted with the prefix 'Aligned', and their respective experimental results are presented in the following table:

| | TALAR | Bert-binary | Aligned-Bert-binary | Pretrained-Bert-binary | Aligned-Pretrained-Bert-binary |
| --- | --- | --- | --- | --- | --- |
| Training | **93.5±8.8** | 22.3±6.6 | 45.8±5.6 | 10.8±4.1 | 23.3±7.0 |
| Testing | **88.3±7.1** | 22.1±4.8 | 45.2±6.3 | 11.1±3.0 | 24.1±6.8 |

The performance of the baseline improves with the implementation of the new network architecture. However, it remains inferior to the TALAR approach in terms of learning efficiency and convergence scores. This highlights the effectiveness of the TALAR method. Please refer to Figure 4 in the PDF file attached to the rebuttal for the training curves. We appreciate your suggestions about the network architecture of the baselines, and will add the new experimental results to the revised version. > Q2: In figure 2(b), it seems that there is no mechanism for selecting different arguments for different predicates. Do all the predicates have the same set of arguments? > A2: At present, we have implemented the TL generator such that all predicate networks utilize a shared argument list. 
This approach proves to be both sufficient and effective when the task language can be represented using a limited number of arguments. However, the TALAR system is inherently flexible and can be readily extended to accommodate more complex tasks. For instance, we can enable each predicate network to take a unique argument list as input by constructing additional, separate argument networks. > Q3: In line 222, it is not precisely described how multiple predicates are concatenated to form the task language. A3: We apologize for the unclear presentation in the original paper regarding these points. The resulting task language is represented as $[\text{Pred}_1,\cdots,\text{Pred}_{N_{pn}}, \text{arg}_1, \cdots, \text{arg}_{N_a}]$, where $\text{Pred}_{i}$ denotes the output of the i-th predicate network. To enhance clarity, we will incorporate this explanation into Section 4.2.1. We hope that our response has addressed your concerns and questions satisfactorily. If you have any further concerns, we are glad to discuss them. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks to the authors for the detailed response. I appreciate the additional results and clarifications from the authors. --- Reply to Comment 1.1.1: Title: Thanks for your follow-up comments Comment: Thanks for your positive feedback and follow-up comments. We are pleased to note that the additional results and clarifications were useful in addressing your concerns.
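The concatenation described in A3 above (predicate-network outputs followed by argument-network outputs, flattened into one task-language vector) can be sketched as follows; the function name and the example values are illustrative assumptions, not the actual TL generator.

```python
def build_task_language(pred_outputs, arg_outputs):
    """Concatenate Pred_1..Pred_{N_pn} (e.g. Boolean outputs as 0/1) followed
    by arg_1..arg_{N_a} (flattened argument vectors) into one flat vector.
    (Illustrative sketch; the real networks output learned values.)"""
    tl = []
    for p in pred_outputs:               # Pred_1, ..., Pred_{N_pn}
        tl.append(float(p))
    for a in arg_outputs:                # arg_1, ..., arg_{N_a}
        tl.extend(float(x) for x in a)
    return tl

# 4 predicate outputs + 2 two-dimensional argument outputs -> 8 entries
tl = build_task_language([1, 0, 1, 0], [[0.2, 0.8], [0.5, 0.5]])
```

The resulting flat vector is what would then condition both the receiver and the downstream policy.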
Summary: This paper presents an algorithm that reduces the policy learning burden in a natural language-conditioned reinforcement learning framework. The authors investigate an inside-out scheme for natural language-conditioned RL and then present a new approach, TALAR, that learns multiple predicates to model object relationships as the task language. In the experiments, the authors demonstrated that TALAR outperforms previous baseline algorithms with an instruction-following policy. Strengths: - This proposed method helps policy learning by learning representations so that natural language instructions are close to semantically similar ones. - It shows much better performance than baseline algorithms in various domains. Weaknesses: - A separate training dataset is required for TALAR learning, and the cost for this seems expensive. - NL instructions must be defined separately for each task and data must be collected. - Performance is sensitive to the type of NL instructions collected for training. - It is difficult to create a training dataset including NL instructions without prior knowledge of the domain. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How are $s$ and $s'$ of $(s,s',L_N)$ constructed in the task dataset $D$? As mentioned in lines 283-285, when composing $(s, s', L_N)$, all $s$ from timestep 0 to $T-1$ are used, but don't most of them correspond only to a part of $L_N$, not exactly $L_N$? - It seems that annotation for predicate representation is also necessary for referential game learning, is that correct? If so, isn't the amount of information required different from other baseline algorithms? It would be better if the annotations required for each algorithm were specified. - In lines 256-258, it is mentioned that the NL instructions were created using ChatGPT. How specifically did you create them using it? It would be better to provide a more detailed explanation with an example. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors mentioned limitations in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful comments and constructive feedback on our paper. We are grateful for the time and effort you put into reviewing our work. Below we address each of your concerns and questions. > Q1: A separate training dataset is required for TALAR learning, and the cost for this seems expensive. > A1: We appreciate your concern regarding the potential expense associated with the need for a separate training dataset for TALAR learning. We would like to clarify that the dataset is necessary for connecting the RL agent with natural language. While some baseline methods do not explicitly use a dataset containing natural language, the language model they depend on also requires large amounts of corpus data to be trained. Besides, the acquisition of this data does not inherently imply a high cost, as the dataset for TALAR learning does not necessarily have to be generated from scratch. Existing datasets can be effectively repurposed or augmented to meet our requirements. For instance, pre-collected datasets have been extensively utilized in the realms of offline RL, imitation learning, and language model training. We recognize that the size of the task dataset is a significant factor to consider. Thus, we have conducted additional experiments to investigate the impact of the number of task samples on the performance of the algorithm. The results are presented in Figure 3 in the rebuttal PDF file. They demonstrate that 10,000 samples are sufficient to train a policy that achieves a success rate of 76%+, which clearly outperforms the other baseline methods. These experimental results suggest that TALAR can effectively train a robust policy even with a limited number of samples in the task dataset. 
> Q2: NL instructions must be defined separately for each task; It is difficult to create a training dataset including NL instructions without prior knowledge of the domain; Performance is sensitive depending on the type of NL instructions collected for training. > A2: We would like to clarify that these instructions are not 'defined' in any rigid sense. In practice, the NL instructions in the task dataset could be flexibly assigned based on the linguistic habits of the person providing the state transition descriptions. This means that we do not require a specific design of NL instructions to align with real-world application scenarios. To illustrate, while constructing the task dataset, one could generate NL instructions by observing a video of a robotic manipulation (i.e., the trajectory) and then describing the robot's execution goal in their own words. This process does not necessitate prior knowledge of the task domain. For example, we use ChatGPT to generate the NL instructions to construct the task dataset. Our experiments in Figure 4 in the original paper have shown that TALAR is robust to unseen NL instructions (please refer to Appendix D.1.1 for examples of training/testing NL instructions). We believe the experimental results underscore the robustness of the proposed approach. > Q3: How to construct $s$ and $s’$ of $(s,s’,L_N)$ in task dataset $D$? > A3: In practice, we could pre-collect the trajectory data $\{s_0,s_1,\cdots,s_T\}_i$, and have a human observer describe the instruction for each trajectory in natural language. Ideally, we would use $(s_0, s_T)$ to construct the task dataset. However, this could potentially make the data collection process both costly and labor-intensive. To mitigate this issue, we utilize a data augmentation strategy. We take each intermediate state in the trajectory as $s$, and the terminal state as $s’$. 
This approach treats NL instructions similarly to the goal in a goal-conditioned RL setting, where all states in the trajectory share the same goal. Consequently, it is sufficient for the agent to determine its action based on the natural language instruction and current state. We believe this data augmentation strategy not only enhances the efficiency of data collection but also enriches the diversity of the task dataset, thereby improving the robustness and generalizability of our model. > Q4: It seems that annotation for predicate representation is also necessary for referential game learning. > A4: Good point. While we acknowledge that annotations for predicate representations might potentially accelerate the learning of the task language, these annotations are not necessary. There are two primary considerations. Firstly, the central objective of our research is not to match the manually designed NL representations for each NLC-RL task. Instead, our focus is on the automatic discovery of task-specific relationships, which we refer to as task language. This approach aligns with our broader aim of fostering a more autonomous and adaptable system. Secondly, providing predicate representations could be laborious and costly. In such scenarios, the benefits of our approach become particularly evident. Our system learns predicate networks in an unsupervised manner through engagement in a referential game with the receiver. > Q5: How did you use ChatGPT specifically? > A5: The specific prompt we use is as follows: "Suppose you want to order a home service robot to [task description]. Give me 100 kinds of diverse expressions. Note that you don't need emphasize you are talking with a robot and don't use word ‘robot’." Here, [task description] is a placeholder for the specific task, such as 'open the microwave door'. We hope that these responses can address your concerns and questions. If you have any further concerns, please let us know. 
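The data augmentation described above — pairing every intermediate state with the terminal state under a shared instruction — can be sketched as follows (the function and variable names are ours, for illustration only):

```python
def augment_trajectory(trajectory, nl_instruction):
    """Build (s, s', L_N) tuples: every intermediate state s is paired with
    the terminal state s' = s_T, all sharing the same NL instruction, as in
    a goal-conditioned RL setting."""
    terminal = trajectory[-1]
    return [(s, terminal, nl_instruction) for s in trajectory[:-1]]

# Toy trajectory {s_0, ..., s_3} described as "open the microwave door".
samples = augment_trajectory(["s0", "s1", "s2", "s3"], "open the microwave door")
# Three tuples: (s0, s3, L_N), (s1, s3, L_N), (s2, s3, L_N)
```

A single annotated trajectory thus yields T training tuples instead of one, which is the efficiency gain claimed for the strategy.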
--- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for taking the time to respond to my review. I read the authors' rebuttal and the other reviews. Since the authors have addressed most of my questions and concerns, I would like to raise my score. I thank the authors for their response. --- Reply to Comment 1.1.1: Comment: We are encouraged that our response has addressed most of your questions and concerns. We appreciate the time and effort you have invested in providing insightful feedback.
Summary: This work tackles the problem of learning an effective natural language (NL) instruction-following agent using an RL algorithm in a goal-conditioned RL setup. The authors propose to learn a task language (TL), a synthetic and vectorized representation containing the abstract meaning of the instruction, via a referential-game-like manner. A TL generator alongside an NL-to-TL translator is learned for deriving an expressive and concise representation for an RL algorithm to effectively utilize. The experiments conducted on simulated environments demonstrate superior performance compared to several instruction-following baselines, as well as better distributed latent representations for the goal-conditioned RL policy. Strengths: - It is an interesting idea to position a referential game in developing the TL, and the efficacy of such a language is the key to effective communication during the guided RL. - The proposed framework is well-modularized, and component-wise it can be interpretable to a certain extent for model analysis. - The paper presentation is easy to follow, with nicely illustrated visualizations. - Experiments are solid, and the supplementary materials are helpful. Weaknesses: - Could you elaborate the rationale behind learning the TL independently of a translator? (I’m guessing better modularization or something similar.) The TL developed this way is not expected to generalize very well across many tasks, and sacrifices the expressiveness of NL to a specifically trained translator. I.e., why not learn the translator (and the TL) in an end-to-end fashion (such as the VQ-VAE paradigm or methods like discrete codebook lookup), and maybe compare its pros and cons to the proposed framework? (The training can utilize basically the same settings as the proposed framework in this work.) - Following the above, if my understanding is correct, the objective of the “receiver” is something like next-word prediction or MLM in BERT. 
In this sense, the L_T operates similarly to a prefix. How would you ensure it is indeed conditioned on the prefix for the word prediction? It is intuitive to think that the model will anyway predict the word correctly if trained well under a standard language modeling objective and/or MLM. - A discussion comparing with recent robotic works that utilize LLMs as a strong resource/planner is needed, such as [1]. - A set of non-frozen LM baselines could be conducted to further emphasize the need for the conciseness of the generated TL. [1] Ahn, Michael, et al. "Do as i can, not as i say: Grounding language in robotic affordances." arXiv 2022. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How does it compare to the program synthesis line of research? PL can be expressive and logical (procedural), containing deterministic execution routes. And PL can be more expressive than the utilized predicate representation for more complex and long-horizon tasks. I think a discussion could be nice. [2] [3] - Referring to the corresponding weakness, how would the authors envision/suggest future works to apply a similar (and/or extended) paradigm to more complex instruction-following tasks, such as navigation, robot manipulation, etc., that are guided with much more sophisticated language? - Is L_T^tilde in Section 4.2.2 discrete or continuous? - Better to include the prompting scheme for ChatGPT instruction generation (basically, paraphrases). [2] Sun, Shao-Hua, et al. "Neural program synthesis from diverse demonstration videos." ICML-18. [3] Sun, Shao-Hua, Te-Lin Wu, and Joseph J. Lim. "Program guided agent." ICLR-19. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: - Please refer to the weaknesses for the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper and providing constructive comments. We hope that our response addresses your concerns, but if we missed anything please let us know. > Q1: Could you elaborate the rationale behind learning the TL independently of a translator? > A1: The main reason we separate TL development and translation is that we can handle and expand these two modules independently, for example, developing the TL during the RL training phase. Besides, we consider the practical aspects of the training procedure. Training these two modules concurrently could potentially lead to a trivial solution (e.g., both the generator and translator outputting zero to achieve a low translation loss). Furthermore, the back-propagation of loss from the translator could potentially interfere with the effective development of the TL. To better illustrate, we conducted an additional experiment that trains the TL generator and translator jointly on the FrankaKitchen task, following the original experiment setting, as shown in Figure 1 in the rebuttal PDF file. The experimental results indicate that training the two modules independently is more effective for training a policy. > Q2: The objective of the “receiver” is something like next-word prediction or MLM in BERT. How would you ensure it is indeed conditioned on the prefix for the word prediction? > A2: Yes, the training objective of the receiver has a similar form to the MLM loss in BERT, while it aims at predicting the next token based on the task language and previous tokens. In some cases, the tokens cannot be accurately predicted without the prefix. For example, there could be similar natural language instructions like “Open the refrigerator door” and “Open the microwave door”. The generated $L_{\rm T}$ must capture the key information in the state pair to assist the model in correctly predicting the ‘refrigerator’ token. 
> Q3: A discussion in comparison with recent robotic works that utilize LLMs as a strong resource/planner is needed. > A3: We would like to discuss related works on language-based robotics control [1,2,3]. SayCan [1] combines low-level tasks with LLMs so that the language model provides high-level knowledge about the procedures for performing complex and temporally extended instructions. Inner Monologue [2] makes further improvements by adding the eponymous “inner monologue", which is implemented as injected feedback from the environment. ReAct [3] introduces LLM reasoning to help the model induce, track, and update action plans as well as handle exceptions. Overall, these works design methods that generate high-level language instructions, while our work provides efficient ways of learning to complete the instructions. > Q4: A set of non-frozen LM baselines could be conducted. > A4: We conducted additional experiments where the parameters of the BERT model in the baseline methods are updated during the training process, as depicted in Figure 2 in the rebuttal PDF file. The results indicate that when the parameters of BERT are optimized, the baseline methods struggle to achieve successful task completion. We hypothesize that this outcome may be attributed to the extensive parameter count of the BERT model, which potentially increases the complexity of the learning process. > Q5: How does it compare to the program synthesis line of research? > A5: That’s a very interesting question, and we think that there are two key differences: 1. The expressiveness and complexity of the target domain-specific language (DSL). Program synthesis often deals with general-purpose programming languages, such as Python, which have rich syntax and semantics. TALAR uses a simpler and more restricted DSL. The advantage of using a simpler DSL is that it can be easier for the agent to learn and understand, and it achieves high learning efficiency. 
A potential limitation is that it may not fully encapsulate all the subtleties and variations inherent in natural language instructions. 2. The way the natural language instructions are translated into the DSL. Program synthesis typically depends on semantic parsing techniques, which necessitate extensive linguistic knowledge and domain expertise, and it can be challenging to manage ambiguity, noise, or incompleteness in natural language. In contrast, TALAR employs a neural network-based translator to map natural language to the task language, thereby eliminating the need for prior knowledge about the task. > Q6: How would the authors envision/suggest future works to apply a similar (and/or extended) paradigm to more complex instruction-following tasks? > A6: One possible direction is to develop a more expressive and flexible task language. Currently, TALAR only employs the format of predicate representation as the task language. How to improve the inherent properties of the learned task language is a key problem to be solved. For example, we could add some constraints to the learning process to ensure the TL generator learns symmetric predicates, which would make it more meaningful. > Q7: Is $\tilde{L_{T}}$ in Section 4.2.2 discrete or continuous? > A7: $\tilde{L_{T}}$ output by the translator is **discrete**. > Q8: The prompting scheme for ChatGPT instruction generation. > A8: The specific prompt we use is as follows: “Suppose you want to order a home service robot to [task description]. Give me 100 kinds of diverse expressions. Note that you don't need emphasize you are talking with a robot and don't use word ‘robot'.” Here, [task description] is a placeholder for the specific task, such as 'open the microwave door'. ### References: [1] Ahn, Michael, et al. Do as i can, not as i say: Grounding language in robotic affordances. [2] Wenlong Huang, et al. Inner monologue: Embodied reasoning through planning with language models. [3] Shunyu Yao, et al. 
ReAct: Synergizing Reasoning and Acting in Language Models. [4] Shao-Hua Sun, et al. Neural program synthesis from diverse demonstration videos. [5] Shao-Hua Sun, et al. Program guided agent. --- Rebuttal Comment 1.1: Comment: Thanks for the responses; the majority of them are answered. For Q2, the reason might be because the instructions are relatively short, so those two tools have almost equal probability of being predicted. I agree with this case. However, it could be just more beneficial to learn to model those key entities instead of standard MLM and hoping the models may hit them frequently by chance. Q3, I'm not sure if I'd agree; SayCan also needs to execute the generated plan. What I would argue for this work is perhaps that the TL is more straightforward (empirically though) to the goal-conditioned learning module. But this could be true as well for goal-driven RL using just some simple language. In the extreme case, I'd even argue that generating the exact ROS programs is just better, but an intermediate symbolic representation like this one could be a nice alternative. Q4, that is an interesting observation. Although it may be beyond the scope of this work, more research should be done on incorporating model updates for goal-driven RL using LMs. Q5, I think as long as your target language has a deterministic representation and a compiler that executes it (deterministically), they are programs (it doesn't have to be a real-world programming language). So, in a sense, TALAR is also doing something similar to program synthesis, but not in the actual DSL token domain. I do like the point on eliminating the domain expertise for the target PL, but I don't think that's entirely the case here, as even knowing there has to be a predicate and entity -- is domain knowledge. While it doesn't diminish the contributions of this work, what I expected was more like discussing how this could potentially benefit (or benefit from) the synthesis community and ease some domain overhead. 
--- Reply to Comment 1.1.1: Title: Response to follow-up questions Comment: Thanks very much for your valuable response. We respond to the follow-up questions as follows. ------ > 1. It could be just more beneficial to learn to model those key entities instead of standard MLM. Thanks for your suggestions, and we agree with this point. We propose TALAR as one specific implementation of our proposed IOL framework, and we are willing to investigate potentially more effective implementations. A preliminary idea is that we might develop the TL during the RL phase, and the learning objective of the referential game becomes successful task completion, learning latent representations of trajectories, or something else. > 2. Discussions about SayCan and robotics works utilizing LLMs. We appreciate your comments on the line of robotics works using LLMs as planners. We will incorporate the discussions on these works into Section 2 (Related Work). > 3. More research should be done on incorporating model updates for goal-driven RL using LMs. The inefficiency of non-frozen-parameter baselines in learning may be attributed to several factors. These could include the quantity of network parameters, the variety of LMs, the extent of NL instructions, and the complexity of RL tasks. We would like to investigate these potential causes to facilitate the effective and efficient construction of natural language-conditioned agents. > 4. While it doesn't diminish the contributions of this work, what I expected was more like discussing how this could potentially benefit (or benefit from) the synthesis community and ease some domain overhead. We appreciate your feedback and clarification on the program synthesis perspective. One possible benefit of TALAR for the synthesis community is that it could provide a new way of generating programs from natural language that does not rely on semantic parsing or rule-based methods. 
For example, utilizing language models and neural networks to learn to generate PL from data (we could perhaps prepare some task-specific descriptions/constraints in the dataset for training the neural network) could enable more robust and efficient program synthesis for various domains and tasks. On the other hand, program synthesis techniques could inspire new designs of the task language. For example, program analysis techniques could be employed to automatically infer or optimize the structure and semantics of the task language. ------ Thanks again for the careful and timely response. We are glad to have any further discussions.
Rebuttal 1: Rebuttal: We would like to express our gratitude to the reviewers and chairs for their valuable time and constructive feedback on our paper. We have carefully considered each comment and provided detailed responses. Thanks to the insightful assessments from the reviewers, we have conducted a more thorough exploration of our method (Reviewer rdPR, nXhV), performed a comprehensive comparison with baselines (Reviewer KoSw, rFCy), further highlighted the effectiveness of our method (Reviewer rdPR, KoSw, rFCy), and improved the presentation regarding predicate representation (Reviewer rdPR, nXhV, KoSw, rFCy). The results of these experiments can be found in the attached PDF file. We would also like to clarify the advantages of our proposed IOL framework and its implementation, TALAR. Our framework has the capability to automatically develop task-specific language, which can subsequently be utilized in the RL phase to enhance the learning efficiency of the RL agent. In comparison to the baseline method, which utilizes a pretrained language model to encode natural language instructions, our approach produces more concise task language, enabling more efficient policy learning. Additionally, unlike methods that require a pre-defined structured representation, our approach eliminates the need for task-specific design, reducing the need for laborious efforts. We hope our responses address your concerns about our paper. Please let us know if you have any further questions or concerns. Pdf: /pdf/01588e396aaa71e5779a8941cf5248eae0af6991.pdf
NeurIPS_2023_submissions_huggingface
2023
Global Structure-Aware Diffusion Process for Low-light Image Enhancement
Accept (poster)
Summary: This work introduces a low-light image enhancement method based on a diffusion model. By incorporating global structure-aware regularization, this work achieves more promising performance than existing methods. Strengths: By considering the global structure during the diffusion process, this work achieves better performance. Weaknesses: Despite its better performance, I am very sure that this work cannot advance this field. It does not solve a problem in a new scenario, and it just makes incremental revisions on top of an established framework. My concerns for this work can be summarized as follows: 1. The authors claim that a naive implementation of the diffusion model alone is inadequate for effectively resolving low-light image enhancement, while they do not prove this claim. All baselines considered in the experiments are non-diffusion-based approaches. Thus, it is difficult to say whether a naive implementation is enough for solving the problem. 2. In line 151, the authors claim that the rank of a matrix is an effective measurement of global structures. This is a very important assumption in this place. This part needs more experiments and justifications. Or, the authors should provide very solid references for this claim in this place. 3. Except for matrix ranking, several approaches have been proposed to better represent global structures, including some deep learning features. The authors should compare with them. 4. Why is it important to use the global structure in this place? If U-Net is a substantial structure of this proposed framework, it is able to capture the global structure during its network propagation. Thus, why does this work design an additional module to capture the global structure? 5. I am not sure about the effectiveness of uncertainty in this place. I cannot find Table 5 mentioned in this manuscript. 6. This work just puts several different image processing modules together without enough insights. It is easy to come up with such an idea. 
Non-local solutions have been discussed in traditional methods. I do not know why they choose to combine them with the diffusion model. Besides, weighted losses have been considered by a large number of methods. Using one here cannot be considered a new contribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my questions above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See my questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **[Responses to Reviewer 2x92]** ### ***1** Response to Weakness 1 (W1): Baseline Model* We refer the reviewer to the results of the ablation studies depicted in Figure 5 on page 9, where the first case corresponds to the baseline diffusion model. It only achieves 26.02 dB, which is noticeably lower than the SOTA method SNR-Aware's 26.70 dB. ### ***2** Response to W2: Theoretical Soundness of Rank-based Regularization* First, we clarify that, as stated in Lines 112 - 114, the ``global structure'' means the non-local similar patterns. We have indeed cited some related works [5, 48] in Sec 3.1.1 of the manuscript. Moreover, [41, 16] also acknowledge the effectiveness of the matrix rank in modeling the image global structure. Besides, it is somewhat common sense that the rank of a 2D matrix measures the correlation between its elements in a global manner rather than an element-wise manner. ### ***3** Response to W3: Different Loss Terms* We made comparisons with the perceptual loss, i.e., we replaced our rank-based regularization with the perceptual loss. The quantitative results listed in the following table show that ours outperforms the perceptual loss by a large margin. We also refer the reviewer to **the second response to Reviewer gyFW** for the results of more loss terms. | Methods | PSNR | SSIM | LPIPS | |:--------------------------:|-----------|-----------|:----------| | Baseline | 26.02 | 0.859 | 0.123 | | Perceptual loss | 26.42 | 0.863 | 0.099 | | Our matrix rank modeling | **27.02** | **0.872** | **0.097** | ### ***4** Response to W4: Architecture or Loss Terms for Structure-aware Diffusion Process* We note that numerous factors influence network performance, where the network structure represents merely one of these variables. A network may not reach its full potential without the appropriate configuration of these diverse training settings. 
While the U-Net architecture is adept at integrating feature maps across multiple resolutions, the vanilla loss doesn't thoroughly harness these properties. We also refer the reviewer to the **second response to Reviewer gyFW** for the detailed quantitative and theoretical analyses of introducing adaptive rank-based regularization into the diffusion model. For your convenience, we have pasted some of the content. " ..... While the current approaches often involve pixel-wise treatments, inherent global structures can be overlooked to some extent. Modeling such global structures potentially augments the performance [C3]. Additionally, conventional pixel-wise regularization terms, such as L1, L2, and SSIM, do not adequately encapsulate nonlocal structures. However, regularization within the feature domain is usually confined within a local region due to the kernel perceptual regions. Moreover, regularizing features can result in fluctuations in the back-propagated gradients, thereby impeding network training. Finally, the results in the following table also verify the above analyses, i.e., the necessity of introducing regularization between $X_{t-1}$ and $X_0$, and the advantage of our rank-based model. |Method| PSNR| SSIM| LPIPS| |--|--|--|--| | Baseline| 26.02| 0.859| 0.123 | | L1 Reg. | 26.45| 0.868| 0.101 | | L2 Reg. | 26.53| 0.869| 0.102 | | SSIM Reg.| 26.23| 0.870| 0.101| | Perceptual Reg.| 26.42| 0.863| 0.099| | Rank Reg. (Ours) | **27.02** | **0.872** | **0.097** | [C3] S. Gu, et al., Weighted nuclear norm minimization with application to image denoising, CVPR'14. " ### ***5** Response to W5: Missing of Table 5* Sorry for our mistake. Current Figure 5 indicates the ablation studies of different loss terms. We will correct the label in the final version. ### ***6** Response to W6: Contributions of the Method* **We beg to differ in your evaluation of our method. 
Our method is by no means an assembly of prior art.** Our major contribution lies in introducing global structure-aware regularization into the learning process of the diffusion model, which **Reviewer U5ii** acknowledges. Current diffusion models, with their naive loss terms, cannot fully capture the global properties. Thus, we introduce non-local patch-based matrix rank modeling to address such an issue. Moreover, our regularization method tends to minimize the trajectory curvature, which potentially helps image reconstruction [C1,C2] and leads to outstanding performance compared to SOTA methods. We refer the reviewer to the **second response to Reviewer gyFW** for the detailed analyses of the necessity and advancement of introducing rank-based regularization into the diffusion process for capturing the global structure and the theoretical analysis in the attached pdf file. Besides, it is also worth mentioning that, as discussed in the **first response to Reviewer U5ii** , the performance of our method is further improved significantly by applying advanced clustering methods for grouping non-local patches, i.e., the overall PSNR value increases about 0.7 dB, compared to those reported in our manuscript. We posit that our contributions will pave the way for fresh perspectives on diffusion models for low-level image processing tasks. And our method improves the SOTA performance of low-light image enhancement to a higher level, which will contribute to the community. [C1] S. Lee, et al., Minimizing trajectory curvature of ode-based generative models, ICML'23. [C2] X. Liu, et al., Flow straight and fast: Learning to generate and transfer data with rectified flow, ICLR'23. --- Rebuttal Comment 1.1: Title: Global structure Comment: Thanks for the rebuttal. If the global structure is an important contribution, have you ever done more experiments to validate the effectiveness of your proposed framework with other established ones. 
--- Reply to Comment 1.1.1: Title: Comparison with other diffusion-based low-light image enhancement methods Comment: We are pleased to hear back from the reviewer. Regarding the comparison with “other established ones,” we assume the reviewer is referring to comparisons with other diffusion-based low-light image enhancement methods. Actually, we indeed made such comparisons. We refer the reviewer to the $4^{th}$ response to **Reviewer ts6H**, where we compared our method with the pyramid diffusion [C3], the latest diffusion-based method for low-light enhancement published in IJCAI’23. For your convenience, we have pasted some results here. | Methods | Architecture | Loss term | PSNR | SSIM | LPIPS | |------------|---------------|------------------------------------------|------|-----|----| | PyDiff [C3] | Pyramid | Vanilla with Multi-scale L1 | 27.090 | 0.879 | 0.100 | | Ours | Vanilla (U-Net) | Rank-based modeling with basic KMeans clustering | 27.336 | 0.874 | 0.097 | | Ours++ | Vanilla (U-Net) | Rank-based modeling with advanced Hierarchical clustering | **27.697** | **0.880** | **0.092** | Additionally, when comparing other regularization strategies or loss terms, we've evaluated our method's performance against L1, L2, and Perceptual losses. Please see our 2nd response to **Reviewer gyFW** for further comparison details. |Method| PSNR| SSIM| LPIPS| |--|--|--|--| | Baseline| 26.02| 0.859| 0.123 | | L1 Reg. | 26.45| 0.868| 0.101 | | L2 Reg. | 26.53| 0.869| 0.102 | | SSIM Reg.| 26.23| 0.870| 0.101| | Perceptual Reg.| 26.42| 0.863| 0.099| | Rank Reg. (Ours) | **27.02** | **0.872** | **0.097** | Last but not least, we also want to note that, regarding our main contributions, the motivation/findings of introducing regularization between $X_{t-1}$ and $X_{0}$ are also essential, as clarified in the $2^{nd}$ response to **Reviewer gyFW**. [C3] Zhou, Dewei, et al. Pyramid Diffusion Models For Low-light Image Enhancement, IJCAI'23.
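As an illustration of rank-based regularization over clustered non-local patches, the sketch below uses the nuclear norm (sum of singular values), the standard convex surrogate for matrix rank. The grouping scheme, patch shapes, and penalty form are our own simplifications for exposition, not the paper's exact formulation:

```python
import numpy as np

def nuclear_norm(M):
    # Sum of singular values: a convex surrogate for rank(M).
    return float(np.linalg.svd(M, compute_uv=False).sum())

def rank_regularizer(patches_pred, patches_ref, groups):
    """For each cluster of non-local patch indices, stack the patches of the
    intermediate prediction and of the reference into matrices, then penalize
    the gap between their nuclear norms (illustrative surrogate only)."""
    loss = 0.0
    for idx in groups:
        P_pred = np.stack([patches_pred[i] for i in idx]).reshape(len(idx), -1)
        P_ref = np.stack([patches_ref[i] for i in idx]).reshape(len(idx), -1)
        loss += abs(nuclear_norm(P_pred) - nuclear_norm(P_ref))
    return loss

# Sanity check: a rank-1 matrix has a single nonzero singular value,
# so its nuclear norm equals ||u|| * ||v|| for M = u v^T.
M = np.outer([1.0, 2.0], [3.0, 4.0])  # nuclear norm = sqrt(5) * 5
```

Because the stacked matrix mixes patches from distant image locations, penalizing its rank couples those locations globally, which pixel-wise L1/L2 terms cannot do.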
Summary: This paper presents an innovative and efficacious approach to regularization within the domain of diffusion models, with particular utility for low-light image reconstruction. The method ingeniously takes into account the latent global structure of images. To achieve this, an image is first divided into patches, and clustering algorithms are then employed to group analogous patches, where the matrix rank is further utilized to minimize the discrepancy between the intermediate reconstruction and the final output. Besides, uncertainty is used to boost performance.

Strengths:
1) The novel perspective opens up a promising avenue for enhancing the performance of diffusion models, representing an important contribution to the field. Also, it may have great potential for other diffusion-based image/video processing tasks.
2) The introduction of this form of regularization is a groundbreaking shift for diffusion models, distinguishing this work from existing literature.
3) The experimental validations lend strong support to the efficacy of the proposed regularization term, and the proposed method achieves the current SOTA.
4) The paper is well-written, and the supplementary material includes the code.

Weaknesses: See the detailed questions.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
1) Are other advanced clustering algorithms more effective in enhancing reconstruction performance? This comparison could provide valuable insights into the optimal algorithm selection for this specific task.
2) For the ablation studies in Table 5 (Figure 5 should be Table 5), the authors are suggested to add two more settings, i.e., remove (b) and (c) and keep (a); remove (c) and keep (a) and (b), to directly validate the adaptive rank regularization, which is the main contribution.
3) In Figure 6, please add the input and GT images. It is better to use different symbols to distinguish the learnable closed-form samples and the input of the forward process.
4) Can other regularization terms (e.g., the L1 loss between X_0 and X_t) improve the original diffusion model?
5) There is potential for the proposed reconstruction algorithm to benefit diffusion models in tasks beyond the current scope of the study. It would be beneficial for the authors to explore this further, extending the validations to other potential applications. At least the authors can discuss this issue to improve the paper.
6) There are some grammar errors, e.g., line 163: "a an".

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Maybe a little additional computational consumption during the training phase.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **[Responses to Reviewer U5ii]**
### ***1** Response to Weakness 1 (W1): Clustering Algorithm*
Thanks for your valuable comment. As shown in the following table, our method greatly benefits from advanced clustering algorithms, and the PSNR value is further improved by 0.7 dB compared with the K-means adopted in our manuscript. These advancements further underscore the prospective utility of our regularization terms in enhancing diffusion models.

| Clustering algorithms | PSNR | SSIM | LPIPS |
|:-------------------------:|:------|-------|-------|
| K-Means | 27.02 | 0.872 | 0.097 |
| Spectral clustering | 27.41 | 0.876 | 0.095 |
| Gaussian Mixture Model | 27.55 | 0.877 | 0.095 |
| Hierarchical clustering | 27.70 | 0.880 | 0.092 |

### ***2** Response to W2: Ablation Settings*
Thanks for the comments. We experimentally verified the importance of our adaptive rank regularization under the suggested settings. As shown in the following table, our approach significantly boosts the baseline from 26.02 dB to 27.02 dB even when (c) uncertainty-guided regularization is eliminated. This outcome strongly suggests that our major contribution lies in the introduction of adaptive rank regularization.

| (a) | (b) | (c) | PSNR | SSIM | LPIPS |
|----------|----------|----------|:------|:-------|:-------:|
| ✕ | ✕ | ✕ | 26.02 | 0.8593 | 0.1226 |
| ✓ | ✕ | ✕ | 26.63 | 0.8701 | 0.1016 |
| ✓ | ✓ | ✕ | 27.02 | 0.8722 | 0.0973 |
| ✓ | ✓ | ✓ | 27.34 | 0.8739 | 0.0969 |

### ***3** Response to W3: Symbol Issue*
Thanks for the comments. We will certainly add the corresponding images for a better comparison. For the symbolic representations, we intend to modify the learnable closed-form sample to better distinguish it from the input.
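As a rough, self-contained sketch of the patch-grouping step that the clustering comparison above varies (illustrative only, not the authors' implementation — the patch size, stride, and cluster count here are arbitrary choices), non-local patches can be extracted and grouped with plain K-means as follows:

```python
import numpy as np

def extract_patches(img, size=8, stride=8):
    """Slice an H x W image into flattened size x size patches."""
    H, W = img.shape
    patches = [
        img[i:i + size, j:j + size].ravel()
        for i in range(0, H - size + 1, stride)
        for j in range(0, W - size + 1, stride)
    ]
    return np.stack(patches)  # (num_patches, size * size)

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means on patch vectors; returns a cluster label per patch."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(k):
            if (labels == c).any():  # skip empty clusters
                centers[c] = X[labels == c].mean(0)
    return labels

img = np.random.default_rng(1).random((64, 64))
patches = extract_patches(img)
labels = kmeans(patches, k=4)
print(patches.shape, labels.shape)  # (64, 64) (64,)
```

Swapping `kmeans` for spectral clustering, a Gaussian mixture model, or hierarchical clustering is the kind of change the table above compares.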
### ***4** Response to W4: Other Regularization Terms*
As experimentally verified in the table of the **second response to Reviewer gyFW**, other regularization terms, such as L1 and L2, can also enhance the baseline, e.g., the L1 regularization improves the baseline diffusion model from 26.02 dB to 26.45 dB, and the L2 regularization from 26.02 dB to 26.53 dB. However, their effectiveness is still much lower than that of our rank-based regularization, which improves the baseline diffusion model from 26.02 dB to 27.02 dB, because both the L1 and L2 regularizations fail to fully capture non-local structures or patterns within an image and cannot explicitly characterize the properties of the structure. We also refer the reviewer to that response for a more detailed analysis.

### ***5** Response to W5: General Validation*
As shown in the **first response to Reviewer 9ew6**, we also validated the effectiveness of the proposed regularization terms on the image super-resolution task. The experiments show that, by incorporating the proposed regularization term, our approach surpasses the state-of-the-art (SOTA) diffusion SR method by a margin of 0.51 dB. It is widely recognized that SR represents a cornerstone in the domain of image reconstruction. Consequently, this enhancement underscores the profound effectiveness and potential of our methodologies. Besides, in the conclusion section of the final version, we will discuss its potential in other low-level image processing tasks as future work.

### ***6** Response to W6: Typos*
Thanks for the comments. We will correct those typos in the final version.

---

Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I greatly appreciate the authors for their exceptional efforts during the rebuttal phase. I am pleased to note that all of my concerns have been thoroughly addressed.
The remarkable achievement of significantly enhancing performance through the utilization of advanced clustering algorithms for grouping image patches in adaptive rank-based regularization is truly impressive. I am also impressed by the authors' clear and well-justified explanations regarding the motivation behind the proposed method, as highlighted in their second response to **Reviewer gyFW**, as well as the superior performance demonstrated compared to state-of-the-art methods. Additionally, after carefully reviewing the comments from my fellow reviewers, I find that the authors' responses effectively address all the questions raised. Drawing from my five years of experience in this field, I hold great confidence in the novelty of the proposed method and believe that its elevated performance level will significantly contribute to the advancement of this research area. Therefore, I believe the paper quality sufficiently meets NeurIPS's standard although I also would like to see the other reviewers’ responses. Furthermore, as previously mentioned, I am convinced that the idea presented in this paper holds potential for other diffusion-based low-level image/video processing tasks, which is somewhat validated by the additional experiments conducted on image super-resolution, as detailed in the authors' response to **Reviewer 9ew6**. In preparing the final version, I highly recommend the authors incorporate the necessary responses as discussed during the rebuttal phase. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your recognition of our work and effort. We will ensure that the final version will include all the necessary information to make it comprehensive and complete. Thanks.
Summary: Aiming at better low-light image enhancement performance with diffusion models, this paper proposes a global structure-aware regularization utilizing the intrinsic non-local structural constituents of image data. An uncertainty map is incorporated into the diffusion model to ease the strict constraints on indeterminate regions. Experimental results demonstrate the effectiveness of the proposed components.

Strengths:
1. The paper incorporates global structure regularization via matrix rank modeling into the diffusion models, showing significant performance improvement compared with state-of-the-art methods.
2. The paper is well-written, and the presentation is pretty good.

Weaknesses:
1. The contribution is not convincing enough. The generality of the proposed global structure-aware regularization based diffusion models is not verified, and the main idea of the adopted uncertainty is not original to this paper.
2. Additionally, the main concern about this paper is that there are many unclear details. For example, what training data is used in this paper? What is the baseline setting in Table 5? Is the inference time of all the methods also measured on RTX 3080 GPUs?
3. More perceptual metrics (e.g., LPIPS, MUSIQ) could be reported for better evaluation.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the strengths and weaknesses section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Several challenging cases are provided. The limitations of the proposed approach should be discussed in more detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **[Responses to Reviewer 9ew6]**
### ***1** Response to W1: Paper Contributions*
Thanks for the comments. We conducted additional experiments on image super-resolution ($16\times$ SR on the CelebA-HQ dataset) to validate the generality of our method on different tasks. The quantitative results in the following table show that our global structure-aware regularization can improve the reconstruction performance, demonstrating its generality to some extent. The results of the SOTA baseline [C1] are also provided for comparison.

| Methods | PSNR | SSIM |
|------------------------------------------------|-------|-------|
| IDM [C1] | 24.01 | 0.710 |
| Ours w/o global structure-aware regularization | 23.94 | 0.713 |
| Ours | 24.52 | 0.728 |

Besides, it is also worth mentioning that, as discussed in the **first response to Reviewer U5ii**, the performance of our method is further improved significantly by applying advanced clustering methods for grouping non-local patches, i.e., the overall PSNR value increases by about 0.7 dB compared to the basic K-means clustering adopted in our manuscript. We also refer the reviewer to the **second response to Reviewer gyFW** for the detailed analysis of our method and additional experimental results of various regularization terms. In all, we posit that our contributions will pave the way for fresh perspectives on diffusion models for low-level image processing tasks, and our method improves the SOTA performance of low-light image enhancement to a new level, which will contribute to the community.

[C1] S. Gao, et al., Implicit diffusion models for continuous super-resolution, CVPR'23.

### ***2** Response to W2: Setting Details*
Sorry for the confusion caused. We will complement the details.
1) We kindly refer the reviewer to Section 4.1 of the manuscript for the details of the used datasets. For example, we utilized 485 pairs of low/normal-light images for training and 15 pairs for testing on the LOLv1 dataset.
2) The baseline was constructed by removing the non-local patch-based matrix rank modeling and the uncertainty-guided regularization; it corresponds to the basic diffusion model.
3) The inference time per image of all the recent SOTA methods in Table S1 of the Supplementary Material was measured using an RTX 3080 GPU on the same server.

### ***3** Response to W3: Comparison of More Perceptual Metrics*
In the manuscript, we have indeed provided the LPIPS comparisons in Table 1. For better evaluation, we further adopted MUSIQ (trained on KonIQ-10k) for assessing the enhancement quality. The following table shows quantitative comparisons with the recent SOTA methods in terms of MUSIQ, which consistently demonstrate the superiority of our method.

| Datasets | SNR-Aware | LLFormer | Ours |
|--------------|-----------------|---------|--------|
| LOLv1 | 61.00 | 58.26 | **71.74** |
| LOLv2-real | 57.76 | 51.13 | **69.34** |
| LOLv2-synthetic | 63.18 | 62.84 | **64.51** |

---

Rebuttal Comment 1.1:
Title: Looking forward to hearing from you
Comment: Dear **Reviewer 9ew6**, Thank you for taking the time to review our submission and providing us with constructive comments and a favorable recommendation. We would like to know if our responses adequately addressed your earlier concerns. Additionally, if you have any further concerns or suggestions, we would be more than happy to address and discuss them to enhance the quality of the paper. We eagerly await your response and look forward to hearing from you. Best regards, The authors

---

Rebuttal Comment 1.2:
Comment: Thanks for the rebuttal. For W2-1, my concern is whether the proposed method is trained only on LOL-v1 or separately on LOL-v1 and LOL-v2, due to the ambiguity stated in Section 4.1. If the latter is the case, then for a fair comparison, the methods in Table 1 that only provide pretrained weights on LOL-v1 should be pointed out separately. Also, will you release the source code to ensure reproducibility?
---

Reply to Comment 1.2.1:
Comment: We appreciate the feedback received. In accordance with the settings outlined in **the very recent** works, i.e., SNR-Aware (CVPR'22), LLFlow (AAAI'22), and LLFormer (AAAI'23), we also trained our models on the LOLv1 and LOLv2 datasets separately. For the remaining methods, we took the best results reported in the relevant papers (e.g., LLFlow, LLFormer, and SNR-Aware), ensuring consistent settings for fair comparisons. We will make this clear in the final version. Besides, it is worth noting that **we have conscientiously included the source code within the supplementary material**. Additionally, we will make our code and pre-trained models publicly available, with the aim of facilitating others in reproducing our results.
Summary: This work proposes a diffusion-based low-light image enhancement framework that exhibits good performance on different benchmarks. A rank-informed regularization term during training and an uncertainty-weighted loss are proposed.

Strengths:
- This work proposes a diffusion-based low-light image enhancement framework that seems to exhibit good performance on different benchmarks.

Weaknesses:
- The motivation of the "Non-local patch-based matrix rank modeling" seems unclear. According to my understanding, the proposed "Non-local patch-based matrix rank modeling" is only used during training. How does it give the model the capacity to be aware of the global structure during inference? Besides, during training with paired data, what is the necessity of using the proposed method to measure the structure consistency between $X_{t-1}$ and $X_0$? The difference can be directly calculated using metrics like L1, L2, SSIM, or feature-level similarities.
- How much does the proposed method increase the training overhead?
- Some formulations are not self-contained. For example, where is the definition of "\alpha_1^2" in Eq. (4)? How is the value of k_t set?
- It is better to illustrate which dataset/subset of the dataset is used to train the proposed method and the other competitors, since it greatly affects the performance of the model. Including a comparison of inference cost would also be better.

Technical Quality: 2 fair
Clarity: 2 fair

Questions for Authors:
- Please refer to the weakness section.
- While the weighting strategy of the training loss is reasonable for ideal cases, for real-captured high-resolution images where small misalignments may exist, will it lead to some side effects?
- Could you explain why the constraint needs to be added only to the singular values, but not to the singular vectors?

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **[Responses to Reviewer gyFW]**
### ***1** Response to Weakness 1 (W1): Relation between Global Structure Modeling and Non-local Patch Rank Modeling*
It is imperative to highlight that by "global structure-aware," we are referring to the network's ability to account for various patterns, such as non-local patches, rather than solely focusing on individual pixels. By regularizing the matrix rank formed from a cluster of these non-local patches, our model can concurrently consider structures from various positions, resulting in a strong capability of global structure modeling. Second, the characteristics and properties of a neural network are primarily acquired during the training phase. In our approach, we incorporate specific regularization to encourage the network weights to reconstruct the global structure. During inference, the network can still effectively exhibit the learned behavior through the weight patterns established in the training phase.

### ***2** Response to W2: Theoretical Soundness*
We answer this question at two levels: why we apply regularization between $X_{t-1}$ and $X_0$, and why we choose the rank-based global structure-aware regularization.

**1) Analysis of applying regularization between $X_{t-1}$ and $X_0$, from two distinct perspectives.**
a) **Trajectory Curvature**. Recent works [C1, C2] show that an efficient and effective ODE-based generative model, e.g., a diffusion model, should have a **lower-curvature (straighter) trajectory**, which helps it stably converge to high-quality solutions. Our regularization term could indeed minimize the trajectory curvature. We further illustrate such properties with experimental results on the reverse trajectory curve in the additional pdf file.
b) **Accumulated Error**. During inference, each step is established on the previous prediction. Thus, accumulated errors inevitably exist.
Through regularizing $X_{t-1}$ towards the ground truth, we expect that each reconstruction step consistently moves towards the GT images, which may also minimize the influence of such accumulated errors.

**2) Analysis of choosing rank-based global structure-aware regularization**
While current approaches often involve pixel-wise treatments, inherent global structures can be overlooked to some extent. Modeling such global structures potentially augments the performance [C3]. Additionally, pixel-wise regularization terms, such as L1, L2, and SSIM, do not adequately encapsulate non-local structures. Meanwhile, regularization within the feature domain is usually confined to a local region due to the limited receptive fields of the kernels. Moreover, regularizing features can result in fluctuations in the back-propagated gradients, thereby impeding network training.

Finally, the following table also verifies the above analyses, i.e., the necessity of introducing regularization between $X_{t-1}$ and $X_0$, and the advantage of our rank-based model.

|Method| PSNR| SSIM| LPIPS|
|--|--|--|--|
| Baseline| 26.02| 0.859| 0.123 |
| L1 Reg. | 26.45| 0.868| 0.101 |
| L2 Reg. | 26.53| 0.869| 0.102 |
| SSIM Reg.| 26.23| 0.870| 0.101|
| Perceptual Reg.| 26.42| 0.863| 0.099|
| Rank Reg. (Ours) | **27.02** | **0.872** | **0.097** |

[C1] S. Lee, et al., Minimizing trajectory curvature of ODE-based generative models, ICML'23.
[C2] X. Liu, et al., Flow straight and fast: Learning to generate and transfer data with rectified flow, ICLR'23.
[C3] S. Gu, et al., Weighted nuclear norm minimization with application to image denoising, CVPR'14.

### ***3** Response to W3: Training Overhead*
Our method significantly improves the quality of enhanced images by exploring global structures, while introducing only a little additional computational cost: about 0.03 extra seconds (from 0.105 to 0.135 s) per training iteration on the LOLv1 dataset.
Besides, no additional computational cost is introduced during inference.

### ***4** Response to W4: Notations*
Sorry for the confusion caused. $\alpha_t$ in Eq. (4) is the same as the $\alpha_t$ in Eq. (2) and **Algorithm 1**, which is defined in Line 88. Moreover, $k_t$ denotes $\alpha_t^2$. We will clarify these in the final version.

### ***5** Response to W5: Training Dataset*
In alignment with the existing methods under comparison, we trained the network on the respective training datasets. For instance, for the LOLv1 testing experiment, we trained the network on the LOLv1 training set. Additionally, for a comprehensive understanding of the inference cost, **we kindly refer the reviewer to the first section of the supplementary material.**

### ***6** Response to Question 1 (Q1): Treating Misalignments*
With misalignments, a decline in reconstruction performance across all methods might be observed. However, generative models, such as diffusion models, rely on conditioning from input images rather than directly translating inputs to the reconstruction, as regression-based methods do. Therefore, their requirements for alignment might be less stringent. Furthermore, the uncertainty module in our method may potentially address misalignments by moderating the regularization on misaligned pixels.

### ***7** Response to Q2: Why not Regularize Singular Vectors*
We regularize the global structure consistency between two images. With such a regularization term, it is expected that the similarity of different non-local patches in $X_{t-1}$ and $X_{0}$ should be identical, i.e., the two images should have the same component proportions. Thus, by matching the singular values, we can minimize the gap between $X_{t-1}$ and $X_{0}$. Note that simultaneously regularizing the singular values and vectors to be the same is equivalent to regularizing all patches to be pixel-wise similar.
Moreover, as listed in **the second response to Reviewer gyFW**, the L1 regularization produces inferior performance. Thus, we do not further regularize the singular vectors.

---

Rebuttal Comment 1.1:
Title: Looking forward to hearing back from you
Comment: Dear **Reviewer gyFW**, Thank you for taking the time to review our submission and providing us with constructive comments. We would like to inquire whether our responses have adequately addressed the concerns you raised earlier. Additionally, if you have any further concerns or suggestions, we would be more than happy to address and discuss them in order to enhance the quality of the paper. We eagerly await your response and look forward to hearing from you. Best regards, The authors

---

Rebuttal Comment 1.2:
Comment: Thanks for your reply. How did you test on the LOLv2-real dataset, since according to the attached code the model is not adapted for images with a resolution of 400×600? Besides, the name of the released checkpoint is "LOLv2-syn.pth". Does it mean you trained two separate models for LOLv2-real and LOLv2-syn?

---

Reply to Comment 1.2.1:
Title: Official Comment by Authors
Comment: We appreciate the feedback received. For testing on LOLv2-real, we padded the input image (400×600) to 416×608 and cropped the output back to its original dimensions. Besides, we indeed trained two separate models for the LOLv2-real and LOLv2-syn datasets, consistently following the settings outlined in the very recent work SNR-Aware (CVPR'22), which conducted extensive experiments on these datasets. Lastly, we will update and make our code and pre-trained models publicly accessible to facilitate the reproduction of our results by others. We sincerely thank you once again for the efforts to assist in improving the quality of our work.
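The pad-then-crop inference trick described in the reply above (400×600 padded to 416×608, output cropped back) can be sketched as follows. This is a hypothetical illustration for a 2-D grayscale array, not the authors' code: padding to a multiple of 32 is our assumption based on a typical U-Net downsampling depth, and `reflect` padding is one common choice.

```python
import numpy as np

def pad_to_multiple(img, m=32):
    """Reflect-pad H and W up to the next multiple of m (e.g. 400x600 -> 416x608)."""
    H, W = img.shape
    ph, pw = (-H) % m, (-W) % m
    padded = np.pad(img, ((0, ph), (0, pw)), mode="reflect")
    return padded, (H, W)

def crop_back(out, orig_hw):
    """Undo the padding by cropping the network output to the original size."""
    H, W = orig_hw
    return out[:H, :W]

x = np.zeros((400, 600))
xp, hw = pad_to_multiple(x)
print(xp.shape)                  # (416, 608)
print(crop_back(xp, hw).shape)   # (400, 600)
```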
Rebuttal 1: Rebuttal:
### General Purpose
We thank all reviewers for your time, constructive comments, and recognition of our work. We believe all concerns have been clearly and directly addressed. Here we also want to summarize a few key clarifications concerning the contributions of our work.

Our **MAJOR** contribution lies in introducing an adaptive rank-based regularization term to promote the baseline diffusion model's ability to capture the global structure. Specifically, we aim to minimize the discrepancy between $X_{t-1}$ and $X_{0}$ to establish a reverse trajectory with reduced curvature, which can potentially benefit high-quality sample generation [C1, C2]. Moreover, during this process, we incorporate rank-based modeling focused on non-local similar patches, thereby exploiting the global structure modeling capabilities of diffusion models. We have experimentally validated the necessity of regularization between $X_{t-1}$ and $X_{0}$, as well as the effectiveness of rank-based modeling. See the figures in the submitted pdf file and the second response to **Reviewer gyFW** for the detailed analyses.

Besides, as experimentally verified in the **first response to Reviewer U5ii**, by simply leveraging advanced clustering algorithms for grouping image patches, the PSNR value of our approach further increases by about 0.7 dB, emphasizing the potential utility of our regularization terms in enhancing diffusion models. We posit that our contributions will pave the way for fresh perspectives on diffusion models for low-level image processing tasks, and our method improves the SOTA performance of low-light image enhancement to a new level, providing a promising benchmark to this community.

Last but not least, **we will make the reviews and author discussion public regardless of the final decision**. Besides, we will include the newly added experiments and analysis in the final manuscript/supplementary material.
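To make the general idea above concrete, here is a minimal numpy sketch of a rank-based term that compares matched groups of non-local patches in $X_{t-1}$ and $X_{0}$. This is our illustrative reading, not the authors' code: the patch size, the hand-picked group coordinates, and the plain L1 distance between singular values are all assumptions.

```python
import numpy as np

def patch_matrix(img, coords, size=8):
    """Stack the size x size patches at the given top-left coords into a (n, size*size) matrix."""
    return np.stack([img[i:i + size, j:j + size].ravel() for i, j in coords])

def rank_reg(x_pred, x_gt, groups, size=8):
    """L1 distance between the singular values of matched non-local patch groups.

    `groups` is a list of coordinate lists, one list per cluster of similar
    patches; the same coordinates are used in both images so groups correspond.
    """
    loss = 0.0
    for coords in groups:
        s_pred = np.linalg.svd(patch_matrix(x_pred, coords, size), compute_uv=False)
        s_gt = np.linalg.svd(patch_matrix(x_gt, coords, size), compute_uv=False)
        loss += np.abs(s_pred - s_gt).sum()
    return loss / len(groups)

rng = np.random.default_rng(0)
gt = rng.random((32, 32))
pred = rng.random((32, 32))
groups = [[(0, 0), (0, 8), (8, 0)], [(16, 16), (16, 24), (24, 16)]]
print(rank_reg(gt, gt, groups))        # 0.0: identical images have identical singular values
print(rank_reg(pred, gt, groups) > 0)  # True for a mismatched prediction
```

Matching only the singular values, not the singular vectors, mirrors the point made in the response to Reviewer gyFW's Q2: it constrains the component proportions of each patch group without forcing pixel-wise equality.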
Pdf: /pdf/5072bd300363a3fddf5179dc1e725367671174d3.pdf
Summary: This paper presents a diffusion-based framework to enhance low-light images. The authors propose a global structure regularization, which leverages the intrinsic non-local structural constituents of image data; besides, they introduce an uncertainty-guided regularization technique, which relaxes constraints on the most extreme portions of the image. The result outperforms some SOTA methods on some low-light image enhancement datasets.

Strengths: The paper proposes a global structure-aware regularization scheme, which capitalizes on images' intrinsic non-local structures and gradually adjusts the regularization strength according to the sampling steps. The method achieves good results on the LOL dataset.

Weaknesses: My main concern is about the novelty of the method; the uncertainty-guided regularization seems to come from [4]. Although the authors have done comparison experiments on some datasets, I think the number of these images is quite limited; I hope the paper can provide a comparison experiment on the Adobe-FiveK dataset [1], following the setting of [2]. The method needs to resample 500 steps; it will take a long time to infer an image. The paper should cite and compare with [3].

[1] Learning Photographic Global Tonal Adjustment with a Database of Input/Output Image Pairs
[2] Underexposed Photo Enhancement Using Deep Illumination Estimation
[3] Pyramid Diffusion Models For Low-light Image Enhancement
[4] Uncertainty-driven loss for single image super-resolution

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why does the method choose DDPM? How about DDIM?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **[Responses to Reviewer ts6H]**
### ***1** Response to Weakness 1 (W1): Novelty of the Method*
We would like to underscore that the **MAJOR** innovation presented in our paper is the integration of global structure regularization into diffusion models. We also refer the reviewer to the **second response to Reviewer gyFW** for a detailed analysis. Furthermore, by leveraging sophisticated clustering algorithms for grouping non-local patches, the overall PSNR of our method is further improved by 0.7 dB, convincingly attesting to the importance and efficacy of regularizing global structures (see **the first response to Reviewer U5ii**). We posit that our contributions will pave the way for fresh perspectives on diffusion models for low-level image processing tasks, as discussed by **Reviewer U5ii**.

In the manuscript, we have indeed acknowledged that our work is inspired by [4]. For the first time, we demonstrate that uncertainty-aware regularization is a simple yet effective way to boost the performance of the diffusion process for low-light image enhancement. But we also agree that this contribution is somewhat minor. We believe the major contribution meets NeurIPS's standard based on its novelty and impressive performance. In the final version, we will re-summarize the contribution part to emphasize the key contribution and list the uncertainty as a minor contribution.

### ***2** Response to W2: Evaluation Dataset*
Following your valuable suggestion, we conducted experiments on the Adobe-FiveK dataset under the same setting as [2]. The quantitative results shown in the following table still verify the advantages of our method. We will add further discussion and a review of [1,2] in the final version.
| Methods | PSNR | SSIM |
|--------------|-------|-------|
| DeepUPE [2] | 23.04 | 0.893 |
| Ours | 23.77 | 0.912 |

### ***3** Response to W3: The Number of Inference Steps*
Our diffusion model takes only **10 steps for inference**, as described in the supplementary material; the 500 steps are only used for training. It is essential to recognize that the use of multiple sampling steps is not an attribute exclusive to our method, but rather a common issue across all diffusion methods, and some methodologies have been developed with the aim of reducing the number of steps required by diffusion models. In addition, it is pivotal to highlight that the inference procedure employed in our method operates in alignment with a standard diffusion model. Consequently, this allows for the integration and utilization of advanced fast-sampling techniques, as detailed in reference [C1].

[C1] C. Lu, et al., DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps, NeurIPS'22.

### ***4** Response to W4: Related Work*
It is worth noting that the mentioned work [3] was accepted at IJCAI'23 and first became available on arXiv on May 17, 2023, which coincided with the paper submission deadline for NeurIPS'23. Thus, it should be considered concurrent work with ours. Nevertheless, we made comparisons with it as its test code is available. Under the same test settings as ours, we obtained the results on LOLv1 by running the publicly released pre-trained model, which was trained with the same training dataset as ours. As shown in the following table, the advantage of our method is still verified.
| Methods | Architecture | Loss term | PSNR | SSIM | LPIPS |
|------------|---------------|------------------------------------------|------|-----|----|
| PyDiff [3] | Pyramid | Vanilla with Multi-scale L1 | 27.090 | 0.879 | 0.100 |
| Ours | Vanilla (U-Net) | Rank-based modeling with basic KMeans clustering | 27.336 | 0.874 | 0.097 |
| Ours++ | Vanilla (U-Net) | Rank-based modeling with advanced Hierarchical clustering | **27.697** | **0.880** | **0.092** |

Additionally, we want to note that our approach enhances the diffusion process through a plug-and-play regularization term rather than the modification of the network structure adopted in [3]. As a result, there is potential for integrating our regularization term into [3] to achieve superior reconstruction. We anticipate exploring this integration once the training code of [3] is released.

### ***5** Response to Question 1 (Q1): DDPM or DDIM*

DDPM has shown great capability in producing high-quality images. To explore the effectiveness of our global structure-aware regularization within diffusion models, we therefore integrate our regularization terms into this foundational diffusion model, exhibiting superior performance relative to state-of-the-art (SOTA) methods. This clearly highlights the potential of our approach. With regard to DDIM, it aims to accelerate the sampling process by generalizing DDPM using non-Markovian diffusion processes. In essence, DDIM can be considered a generalized version of DDPM, and they share the same training procedure. Consequently, we are confident that our regularization terms could be seamlessly integrated into DDIM, offering significant potential for exceptional performance.

---

Rebuttal Comment 1.1: Title: Looking forward to hearing from you Comment: Dear **Reviewer ts6H**, Thank you for taking the time to review our submission and providing us with constructive comments and a favorable recommendation.
We would like to know whether our responses adequately addressed your earlier concerns. Additionally, if you have any further concerns or suggestions, we would be more than happy to address and discuss them to enhance the quality of the paper. We eagerly await your response and look forward to hearing from you. Best regards, The authors

---

Rebuttal Comment 1.2: Comment: Thanks for your rebuttal. I think you should add a comparison with MAXIM: Multi-Axis MLP for Image Processing (CVPR'22), which can achieve 26+ PSNR on Adobe-FiveK. There seems to be a 2+ dB gap.

---

Reply to Comment 1.2.1: Comment: We appreciate the feedback received. In light of the previous comments, we conducted a comparative analysis in alignment with the original configuration proposed by [2]. Our evaluation encompassed **all 500 test images at their native resolutions**. It is noteworthy that MAXIM evaluated **only 400 test images and subjected them to cropping and resizing to a resolution of $512 \times 512$**. Consequently, **drawing a fair comparison with their published results proves difficult**. Given our time constraints, our present model on the FiveK dataset has been tuned only in a preliminary manner; with more meticulous calibration, there is potential for further improving our results. It is also worth mentioning that we are devising global structure-aware loss terms to explore the potential of the diffusion model. Thus, advanced backbones, e.g., the mentioned MAXIM, could be integrated into our framework to replace the vanilla U-Net for further performance improvement.
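For readers unfamiliar with the distinction drawn in the response to Q1 above, the relationship between DDPM and DDIM sampling can be sketched as follows. These are the standard formulations from the original DDPM and DDIM papers, not equations from the submission; here $\alpha_t = 1 - \beta_t$, $\bar\alpha_t = \prod_{s=1}^{t}\alpha_s$, and $\epsilon_\theta$ is the shared noise-prediction network:

```latex
% DDPM ancestral sampling (stochastic, Markovian):
x_{t-1} = \frac{1}{\sqrt{\alpha_t}}
          \left( x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,
          \epsilon_\theta(x_t, t) \right) + \sigma_t z,
          \qquad z \sim \mathcal{N}(0, I)

% DDIM sampling (non-Markovian; deterministic when \sigma_t = 0),
% which permits skipping steps along a sub-sequence of timesteps:
x_{t-1} = \sqrt{\bar\alpha_{t-1}}\,
          \underbrace{\left( \frac{x_t - \sqrt{1-\bar\alpha_t}\,
          \epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}} \right)}_{\text{predicted } x_0}
        + \sqrt{1-\bar\alpha_{t-1}-\sigma_t^2}\,\epsilon_\theta(x_t, t)
        + \sigma_t z
```

Because both samplers reuse the same trained $\epsilon_\theta$, a regularization term added during training (as argued in the rebuttal) carries over to either sampler unchanged.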
Chatting Makes Perfect: Chat-based Image Retrieval
Accept (poster)
Summary: The paper tackles the problem of chat-based image retrieval. The goal is that the system, being chat-based and powered by an LLM, engages in a conversation with the user to elicit information, in addition to an initial query, in order to clarify the user’s search intent by asking follow-up questions. These questions form a dialog with the user in order to retrieve the desired image from a large corpus. The authors also suggest an evaluation protocol suitable for continual progress and assessment of questioners using a Visual Dialog model in place of a human, test their framework on real human interactions, which involves collecting answers from users, and further evaluate the method against strong baselines generated from prior art. Strengths: The paper tackles an interesting problem and proposes an interesting setup as opposed to the text-image retrieval task. If all data and the protocol are made available, I think this can be very valuable for the research community. Weaknesses: I really like the overall goal of the paper and I think it's an interesting direction. However, I have a few concerns in terms of clarity (please see questions below) that I think need to be further detailed in the paper. Also, what I think is lacking in the current version is a comparison with classical text-image retrieval methods. If I understood correctly, the method can be applied to any dataset, so I think that applying it to some common benchmarks for text-image retrieval and showing how the dialog rounds affect the results can further validate this approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Are there other methods than BLIP2 that can be used to answer the questions? Based on the results from Fig 3, up to a certain dialog round of 4-5 there is no difference between human answers and BLIP2. Do the authors have any insights on what happens beyond that point? 2. Did the humans have access to the image when answering the questions? 3.
What data do you use for evaluation, since it's not clear? VisDial? 4. Will all the data and all the evaluation protocol be made available online? 5. Can you elaborate a bit on the Unanswered setup? The model has available the caption + several questions but no answers when making the retrieval? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Various limitations are discussed throughout the paper, and the societal impact is discussed, though briefly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer **L59s**, thank you for your insightful feedback! **Lack of comparison with classical text-image retrieval methods:** Please see the global part of the rebuttal and the figures in the accompanying pdf, where we address this concern. ### Answers to questions: 1. **“Are there other methods than BLIP2 that can be used to answer the questions?”**: Theoretically, any generative Visual-Dialog model might serve as an answerer. We chose the SoTA model for this task (BLIP2). **“Fig 3, up to a certain dialog round of 4-5 there is no difference between human answers and BLIP2. Do the authors have any insights on what happens beyond that point?”**: Our analysis shows that human answers are longer (see analysis in lines 29-34 in suppl. material), implying that humans tend to volunteer additional information, while BLIP2 provides short and concise answers (e.g. Fig 7, and suppl. Fig 5). Another source of this discrepancy might be the less accurate and less elaborate BLIP2 answers in long dialogues. 2. **“Did the humans have access to the image when answering the questions?”**: Yes, in all experiments that involved ground-truth human answers, users had access to the image. 3. **“What data do you use for evaluation since it's not clear? VisDial?”**: Yes, our evaluation was primarily based on different measures for the image retrieval task. We use the VisDial dataset (lines 55-59), where each image is associated with a dialog. Thanks to VisDial we are able to test ChatIR in many different scenarios, such as combining a human questioner and a machine or human answerer. We test these different dialog options and measure the ability to retrieve images from the pool (50k). As mentioned earlier, for this rebuttal we also use COCO and Flickr30k with synthetic dialogues. 4. **“Will all the data and all the evaluation protocol be made available online?”**: Yes, the entire data and protocols will be made available upon acceptance. 5.
**“Can you elaborate a bit on the Unanswered setup?”**: Following the description in lines 194-200, in this setting the *questioner* (ChatGPT) is provided with the caption only. It is asked to generate 10 different questions at once, based solely on the caption (without seeing any answers). Thus, although the retrieval is conducted using the full dialogues (questions and answers), in this setting the answers have no effect on question generation in the dialog. We will clarify this in the revision. This experiment also shows the answers’ influence on question generation; namely, questions conditioned on the prior dialogue history are more effective for retrieval. --- Rebuttal Comment 1.1: Title: Rebuttal Comment: I acknowledge that I have read the rebuttal. The authors have addressed some of my concerns. I think that the setup is interesting, hence I am raising my score to Borderline Accept. I still think that some more comparisons with other methods, especially a comparison with classical retrieval methods, would be beneficial for better understanding the new retrieval setup. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We compared our method to CLIP and BLIP, which are considered SoTA with respect to all previous methods. However, we would be happy to compare with more methods that seem relevant. We welcome any suggestions as to which classical methods to compare with.
Summary: In this paper, the authors present a dialog-based image retrieval system and show strong performance against baseline models. Strengths: 1. The authors' attempt to augment the image retrieval process with dialogue is interesting and largely under-explored. 2. The paper is well-written and easy to read. Weaknesses: 1. The authors largely miss out on what type of dialogue one needs to have (given the caption) such that it helps to better visualize the caption. Why does one even need to do that? Is it because the captions are not useful/detailed? 2. The experiments are solely designed around the availability of instruction-tuned LLMs (for question generation), BLIP2 (for answering), and the VisDial dataset. Once you answer my previous comment, please justify why VisDial (alone) is enough for this experiment. Are instruction-tuned LLMs generating the type of questions you expected? 3. The paper is not well motivated; specifically, the need to have a dialogue for image retrieval. Though the results are encouraging, the reason why we are doing these experiments in the first place is not clear. I request the authors to give some more thought to this. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Check my comments on weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: It is not clear how this work could be expanded beyond the current setup, mainly because it is not very clear what type of dialogue one needs to have with the system. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer **34K9**, thank you for your insightful feedback! 1: **Regarding the motivation and “What type of dialogue one needs to have (given the caption) …”**: The motivation is that, typically, a short query is not sufficient to retrieve the correct images from a large corpus, certainly not in a single attempt, as we discussed in lines 27-30 and 35-39. Commonly, people search for an image with a short description, which might fail to fully convey the search intent, or may result in many images that comply with the same description (please see Fig 4,5 and examples in suppl. material). Our idea is to engage in a conversation with the user in order to elicit additional information and clarify the user’s search intent (lines 35-39). Gradually eliciting and accumulating information from the user, and being able to process it in a unified way, is the essence of our ChatIR. **Type of questions:** Since the end goal of ChatIR is to retrieve the image, we expect the questioner to generate questions that, when answered, improve the retrieval results (which is the instruction we give to ChatGPT). These questions typically relate to color, location, time of day, number of certain objects, and more (see examples in Figs 4 and 5 in the main paper, and Fig. 1-5 in suppl. for LLM and human questioners). We show that such dialogues are able to significantly boost the retrieval performance (lines 173-176 and 208-210). Please also see the global part of the rebuttal and the figures in the accompanying pdf, where we discuss the advantages of our approach. 2.1: **“please justify why VisDial (alone) is enough for this experiment”**: Since our task requires data consisting of images paired with dialogues, we use the VisDial dataset, which contains such annotations (although it was annotated for a different task), as we describe in lines 55-59.
Thanks to VisDial we are able to test ChatIR in many different scenarios, such as combining a human questioner and a machine or human answerer. These results are presented in Fig 2 (paper) and Fig 8 (suppl. material). 2.2: **“Are instruction-tuned LLMs generating the type of questions you expected?”**: Yes. The end goal of ChatIR is to retrieve the image, thus we expect the questioner to generate a question that, when answered, improves the retrieval results. This is indeed the case, as our evaluations show (Fig 2, 3 and suppl. material). More specifically, as we discuss in Sec 4.1, some instruction-tuned LLMs generate the type of questions we expected (e.g. ChatGPT, see examples in Fig 4, 5 and suppl. material). On the other hand, some others (e.g. FLAN-T5, FLAN-ALPACA) struggle with long context, which results in question repetitions (lines 224-231) and degraded performance (Fig 2). 3: **Motivation**: Please see our response above.
Summary: This study introduces ChatIR, a chat-based image retrieval system that engages in a conversation with the user to clarify their search intent and retrieve the desired image from a large corpus. The system leverages Large Language Models to generate follow-up questions to an initial image description and achieves a success rate of over 78% after 5 dialogue rounds, compared to 75% when questions are asked by humans and 64% for a single shot text-to-image retrieval. Strengths: The strength of this submission lies in its clear and compelling motivation, which focuses on utilizing chat interactions to refine search and enhance image retrieval. The authors effectively articulate the significance of this research direction, highlighting the potential of chat-based interactions to improve the accuracy and relevance of image search results. Additionally, the submission features a nice illustration that visually communicates the proposed approach, providing a clear representation of the underlying concept. This visual aid aids in understanding the methodology and reinforces the clarity of the paper. Weaknesses: One notable weakness of the submission is the lack of baseline comparison. While the paper introduces a novel conversation-based setup for image retrieval, it only reports results within this specific setup. The absence of results on the traditional single-hop text-to-image retrieval using the same dataset raises concerns about the necessity of introducing conversation into the retrieval process. Without a comparison to traditional single-hop methods, it is difficult to fully understand the advantages and potential improvements offered by the conversation-based approach (what if traditional methods already achieve comparable success rates?). Such comparisons would help address the question of why the system should be made more complex with conversation, and provide a stronger rationale for the proposed approach. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: (1) Missing reference for "common practice in image retrieval" in line 148. (2) The readability of Section 4 is currently hindered by the complexity of the annotations, such as the format of "Q: XXX & A: YYY." To enhance readability, I recommend using a different font or formatting approach to simplify the annotations in the revised version. Simplifying the annotations will make the section more accessible and easier to follow for readers, ensuring a smoother comprehension of the methodology and findings. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have included discussions of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer **Z6DK**, thank you for your insightful feedback! **The absence of results on the traditional single-hop text-to-image retrieval:** Please see the global part of the rebuttal and the figures in the accompanying pdf, where we address this concern. Q1: Missing reference: Thank you. We will add it. Q2: Readability Q: XXX & A: YYY: We appreciate this suggestion and will change it in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks for providing the rebuttal response. I acknowledge that I have read the response and the additional material in the provided PDF.
Summary: This paper proposes a chat-based image retrieval framework that can clarify users’ search intent. The authors design a question generation model based on an LLM to generate questions from the dialog history. After the user answers a question, an image retriever, which is a transformer model, is trained to extract a text embedding to search for the image. The authors also use an LLM to answer the questions, taking the place of users, for fast training and evaluation. The authors use an existing dataset to evaluate the method. Strengths: + The proposed framework is useful for clarifying user search intent; thus it is practically valuable. + The proposed framework is well evaluated. + The authors use an existing dataset, avoiding the need to collect a new one. Weaknesses: --- Components of the proposed framework are existing models, which weakens the novelty of this paper. --- Lacks comparison with SoTA image-text retrieval methods on image-text retrieval datasets in the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer **6Lfv**, thank you for your insightful feedback! **Lack comparison with SoTA image-text retrieval methods on image-text retrieval datasets in experiments:** Please see the global part of the rebuttal and the figures in the accompanying pdf, where we address this concern.
Rebuttal 1: Rebuttal: Dear Reviewers and ACs, We were happy to see that the reviewers have found that our paper presents an “interesting problem and proposes an interesting setup” (L59s), that augmenting the retrieval “process with dialogue is interesting and largely under-explored” (34K9), that “the strength of this submission lies in its clear and compelling motivation” (Z6DK), and that the proposed framework is practically valuable and well evaluated (6Lfv). Your input is instrumental for improving our paper. The main concern of the reviewers seems to be the lack of comparisons with existing text-to-image retrieval baselines and evaluations on additional datasets. We would like to stress that we indeed compare our retrieval method to an existing Text-To-Image (TTI) retrieval baseline (on VisDial), reporting results in lines 174-176 and 208-210. More specifically, the baseline we compare to is BLIP, since this is the publicly available SoTA model for TTI [1]. For a fair comparison we further fine-tuned this baseline model on the VisDial dataset for the TTI task (providing it with images and their captions only). We find that the retrieval performance of the fine-tuned single-hop text-to-image baseline is nearly identical to our dialogue-trained model (63.66% vs. 63.61%). This corresponds to dialogues with 0 rounds in Figure 2a. However, using increasingly longer dialogues, our method eventually achieves retrieval performance over 81% (Fig. 2a), showing a huge improvement over the single-hop TTI SoTA baseline. As for the choice of dataset, since VisDial is the only dataset containing image-dialogue pairs, we found it to be the most suitable dataset and benchmark for evaluating our method. However, due to the raised concerns and for further validation, we generated two synthetic image-dialogue datasets (using ChatGPT as a questioner, and BLIP2 [2] as an answerer) from two Text-To-Image (TTI) benchmarks, Flickr30k and COCO (see figures in the PDF attachment).
We then compared our method to two TTI baselines, namely CLIP and BLIP, in a zero-shot setting (i.e., none of the compared methods have been fine-tuned on either dataset). We find that our method surpasses the two baselines by a large margin, on both datasets. Furthermore, when we provide the baselines with the concatenated text of the dialogues, instead of just a caption, they exhibit a significant improvement in success rate (please see attached figures) over the single-hop TTI attempt. Nevertheless, the gap in favor of our method is maintained (COCO) or increased (Flickr30k). Our zero-shot results on COCO and Flickr30k (in attached figures) show that: 1. Dialogues improve retrieval results for off-the-shelf TTI models. Although the CLIP and BLIP baselines have only been trained for retrieval with relatively short (single-hop) text queries, they are still capable of leveraging the added information in the concatenated Q&A text. Note that CLIP becomes saturated at a certain point due to the 77 token limit on the input. 2. Our strategy of training an Image Retriever model with dialogues (as described in Sec. 3) further improves the retrieval over the compared methods, raising accuracy from 83% to 87% at single-hop retrieval, and surpassing 95% after 10 dialogue rounds (on COCO). We believe the above addresses the main concerns in the reviews, and we welcome further discussion. References: [1] Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. In ICML, pages 12888–12900, 2022 [2] Junnan Li, Dongxu Li, Silvio Savarese, and Steven C. H. Hoi. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. CoRR, abs/2301.12597, 2023 Pdf: /pdf/d65efdefd3473e5180be6c3ee4d234e2c23e01a1.pdf
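The zero-shot comparison described above reduces to a simple procedure: embed the query text (the caption alone, or the caption concatenated with Q&A rounds), rank all gallery images by similarity, and count a success when the target image lands in the top K. A minimal sketch of that evaluation loop, using hand-made toy vectors in place of real CLIP/BLIP embeddings (all numbers below are illustrative, not results from the paper):

```python
# Sketch of the dialogue-based retrieval evaluation: rank a gallery of image
# embeddings by cosine similarity to a text-query embedding, and report
# Hits@K (is the target image among the top-K results?).
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def hits_at_k(query_emb, gallery, target_idx, k):
    """Rank gallery images by similarity to the query embedding; return
    True if the target image is ranked within the top-k."""
    scores = [cosine(query_emb, img) for img in gallery]
    ranked = sorted(range(len(gallery)), key=lambda i: -scores[i])
    return target_idx in ranked[:k]

# Toy 3-d embeddings (purely hypothetical). The target image is index 1.
gallery = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
caption_emb = [0.6, 0.8, 0.0]     # short caption: still ambiguous
dialogue_emb = [0.1, 0.99, 0.05]  # caption + Q&A rounds: sharper query

print(hits_at_k(caption_emb, gallery, target_idx=1, k=1))   # False: a distractor wins
print(hits_at_k(dialogue_emb, gallery, target_idx=1, k=1))  # True: dialogue retrieves it
```

The toy example mirrors the trend reported above: concatenating dialogue rounds sharpens the query embedding and moves the target up the ranking.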
NeurIPS_2023_submissions_huggingface
2023
Learning Dynamic Attribute-factored World Models for Efficient Multi-object Reinforcement Learning
Accept (poster)
Summary: This paper proposes a new relational world model at the attribute level. Based on state knowledge from the environment or unsupervised object-centric representation (OCR) learning [1,2], the authors collect attribute-level object-centric knowledge and, based on it, learn a relational world model. Because their model can infer the detailed relationship between an action and specific attributes of objects (e.g., a “push” action changes only the object’s position, not its color or shape) or between the objects, their world model is more compositional and generalizes well. They evaluated it through diverse generalization experiments on three benchmarks, some of which give ground-truth state information while others give only visual inputs. [1] SM Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Geoffrey E Hinton, et al. Attend, infer, repeat: Fast scene understanding with generative models. Advances in neural information processing systems, 29, 2016. [2] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. Advances in Neural Information Processing Systems, 33:11525–11538, 2020. Strengths: - This paper proposes a new attribute-level factorized world model for multi-object RL tasks. It is an excellent approach that goes a step deeper than object-wise world models such as C-SWM [1]. Through this, they extend compositionality to the attribute level. - They clearly discuss the model design and how the learnable parameters are fit. - They evaluated their model against diverse world-model methods and prior works that studied policy learning with unsupervised object-centric representation learning. - Their evaluation shows that their model outperforms baselines significantly when more generalization is required.
- They also studied ablations to analyze which components are more important for the performance. [1] Kipf, Thomas, Elise Van der Pol, and Max Welling. "Contrastive learning of structured world models." arXiv preprint arXiv:1911.12247 (2019). Weaknesses: - Their model requires class labels to learn the class template graph from multiple objects in a class. It is reasonable to learn a template graph per class, and it makes sense that learning one graph from multiple objects of the same class reduces the modeling complexity and improves performance. However, this is an unrealistic assumption for real-world data: if we applied this model to real-world data, additional labor would be required to label each object. - Their model assumes that the attributes are accessible. Similar to the class label, this could be a hard assumption to satisfy for now. For example, it could be reasonable for AIR [1], because the AIR encoder returns position and size separately, but it is not for recent OCR models such as SA [2] or SLATE [3]. Recently, a few works have tried to represent OCR through disentangled representation vectors [4], so we can expect that the attributes can (maybe) be given by the encoders. I am curious how you applied SA to your model. - Defining the reward function at the object level is too limited to cover general tasks. Sometimes the reward could be defined through the relationship between objects, such as in the object comparison task in the Image benchmark. I am curious how you made your model solve that task, since in your modeling, reward estimation is learned through step 1 (Class Learning in single-object environments). - Action binding is unclearly discussed. In the section on DAFT-MDP, you assumed that an action has an effect on only one object at a time (line 74).
However, action binding can affect multiple objects through soft attention (section 3.2.1), and I cannot find details on how the action's effect is restricted to a single object in the detailed criteria (line 690 in Appendix). - Fine-tuning to a new multi-object domain is unclearly discussed. Which parameters are fine-tuned? Does fine-tuning mean that the parameters of the world model are frozen and only the parameters of the policy are learned? - (minor) typos - In line 222, $i$ should be $m$. - In line 235, $o_i^t$ should be $o_1^t$. - In line 236, isn't the value $v^t = <f_v(o_1^t),...,f_v(o_m^t)>$? - In line 257, the input, isn't it $h^{t+1}_{(i,j)}$? - PPO is not MBRL, but it is mentioned as one of them in line 293. [1] SM Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Geoffrey E Hinton, et al. Attend, infer, repeat: Fast scene understanding with generative models. Advances in neural information processing systems, 29, 2016. [2] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. Advances in Neural Information Processing Systems, 33:11525–11538, 2020. [3] Singh, Gautam, Fei Deng, and Sungjin Ahn. "Illiterate dall-e learns to compose." arXiv preprint arXiv:2110.11405 (2021). [4] Singh, Gautam, Yeongbin Kim, and Sungjin Ahn. "Neural systematic binder." The Eleventh International Conference on Learning Representations. 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the ablation study (D.3.2 in Appendix), how did you represent the interactions other than via the class template or dynamic interaction graphs? For example, when the action is to push an object, how is the action represented? Can we see an ablation without the class template graph, i.e., under the assumption that every object belongs to a single class? - What is TFD in line 852?
- In [1], the Transformer encoder (similar to the self-attention that you evaluated, but using a learnable token for policy) shows good performances for the Image benchmark. Could you compare it with your model? - You utilized AIR [2] for the approaches relying on symbolic inputs. Could you test them with SA [3], not AIR? A few tasks are enough to evaluate due to the time limitation for the rebuttal. [1] Jaesik Yoon, Yi-Fu Wu, Heechul Bae, and Sungjin Ahn. An investigation into pre-training object-centric representations for reinforcement learning. International Conference on Machine Learning (ICML), 2023. [2] SM Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Geoffrey E Hinton, et al. Attend, infer, repeat: Fast scene understanding with generative models. Advances in neural information processing systems, 29, 2016. [3] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. Advances in Neural Information Processing Systems, 33:11525–11538, 2020. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors didn't discuss their model's limitations. As I roughly discuss it, their modeling requires a class template that can restrict the application of their modeling, not just for collecting the class label, but also for their assumption for the class that in the class, the attributes and interaction patterns are shared. Another limitation is that it requires the encoder or environment must give the attribute-level representations. Because their model utilizes them as an input and the target of their objective, one-step prediction. 
Beyond those design limitations, I see no ethical issues with this work, since it is fundamental research rather than an application that can directly affect the real world. Additionally, the authors did not discuss the computational overhead of training or running their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback; it will surely improve our paper. We answer your concerns in the following. **Requiring the class label** We first want to clarify a potential misunderstanding: we do not need labels for each new object, but we need to train a classifier that can classify an object's class based on its extracted features, as explained in Appendix C.1.2. We will clarify this explanation in the main part of the paper. On the other hand, our current method would not work if an object of a new class appears. In that case, one solution could be to use off-the-shelf open-set recognition methods [1] to detect and categorize novel classes, followed by learning the template graph for the new class from collected trajectories. This is an interesting research direction and we intend to explore solutions to this limitation in subsequent research. [1] Geng et al. "Recent advances in open set recognition: A survey." IEEE TPAMI (2020). **Assuming that the attributes are accessible** We again want to clarify a potential misunderstanding: the attributes in this work are not predefined, and we can use pre-trained object-centric models like SA to extract them from visual inputs. This means that there may not be a direct mapping between the extracted features and the ground-truth attributes, or a disentangled representation of the ground-truth attributes, when using SA. While using such encoders might lead to less interpretable class template graphs or interaction pattern graphs, our empirical results show that our framework still provides performance benefits. The direction you mention regarding disentangled representation vectors, as in the work of Singh et al. [4], is indeed promising and we will consider it in future work. **The reward could be defined through the relationship between objects** That's a fair point. In this case, we can model the reward function as being affected by all objects.
For the object comparison task in the Image benchmark, we indeed do not learn the sparse connections between the attributes and the reward in the class template graphs and instead assume that they are all fully connected. We will clarify this special case in the final version. **Unclear discussion of action binding** Thanks for your careful reading and feedback. While it is accurate that soft attention can distribute action effects across multiple objects, the real-world scenarios we considered and the graphs we learned from data show that in most cases only one object is significantly affected by a particular action. The subtle diffusion of the effect to other objects via soft attention is generally negligible. Our choice of soft attention is backed by its inherent flexibility, demonstrated by previous work [1], especially in robotic manipulation tasks. To further elucidate the effectiveness of soft attention, we conducted a comparative study with hard attention networks, as described in [1]. The results are in Fig. 2 VIII of the rebuttal pdf in the general response, demonstrating that soft attention outperforms its hard counterpart. We hope this clears up the ambiguity surrounding action binding with soft attention in our work. [1] Biza, Ondrej, et al. "Binding actions to objects in world models." ICLR 2022 workshop on Objects, Structure and Causality. **Fine-tuning to a new multi-object domain** The parameters of the world model are indeed frozen, and we indeed only need to learn the parameters for the policy. To be completely clear, this does not mean that the world model is static: the class template graphs and interaction graphs stay the same, but the objects themselves are not fixed and we still need to infer their latent parameters.
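The soft-attention action binding discussed in this rebuttal can be illustrated with a minimal sketch. This is our own toy illustration, not the authors' implementation: the function names, embedding dimensions, and temperature parameter are all assumptions. It shows how a softmax over action-object similarity scores concentrates the action's effect on one object slot while technically touching all of them.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-d score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def bind_action(action_emb, object_embs, temperature=1.0):
    """Soft action binding: attention weights of one action over object slots.

    action_emb:  (d,)   embedding of the current action (the query).
    object_embs: (m, d) one embedding per object slot (the keys).
    Returns the per-object weights and the action input routed to each slot.
    """
    d = len(action_emb)
    scores = object_embs @ action_emb / (temperature * np.sqrt(d))
    weights = softmax(scores)                        # (m,), sums to 1
    routed = weights[:, None] * action_emb[None, :]  # (m, d) per-object action
    return weights, routed

# Toy example: three object slots, an action aligned with slot 0.
objects = np.eye(3, 4)                  # three orthogonal 4-d slot embeddings
action = np.array([1.0, 0.1, 0.0, 0.0])
weights, routed = bind_action(action, objects, temperature=0.1)
# With a low temperature, almost all attention mass lands on slot 0, matching
# the observation above that in practice only one object is significantly
# affected by a given action; the residual mass on the other slots is the
# "subtle diffusion" the rebuttal describes as negligible.
```

A hard-attention variant of the kind ablated in the rebuttal would instead pick `argmax(scores)` and route the action to that single slot, which is non-differentiable and, per the reported results, performed worse than the soft version.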
**In the ablations in D.3.2, how did you represent the interactions without the class template or dynamic interaction graphs?** In the absence of the class template or dynamic interaction graphs, the interaction representation assumes a dense configuration: every pair of objects interacts with every other, and this interaction remains fixed over time. For situations without action binding, as illustrated in Case VI of Fig. 2 in our rebuttal pdf in the general response, the action node (e.g., push) is uniformly linked to all objects in the scene, implying that the action has the potential to affect every object. **Ablation with a single class** We have conducted this ablation for 3-Push+3-Switch (L+O+S) in Fig. 2 in the rebuttal pdf in the general response. The results, presented in Case V, show a considerable decline in performance, reinforcing the importance of class-specific distinctions in our model. We will include these results in the final version. **TFD in Line 852** Thanks for the pointer. It means the transformer decoder. We follow this design (introduced in NCS) only for Spriteworld and will explain this in detail in the final appendix. **Comparing with the Transformer encoder [Yoon et al. 2023]** Thanks for the suggestion; we tried to run this experiment but did not finish in time for the rebuttal. We will include it in the final version. **Using SA for approaches with symbolic inputs** We conducted preliminary tests using SA for the symbolic-input-based approaches, particularly on the stacking benchmark with 8 blocks. This involved integrating SA with SRICS, GNN, and LRN. From our experiments, we observed that models leveraging SA yielded slightly worse results than those employing AIR, except for LRN. Specifically, you can refer to the last row of Table 2 in the rebuttal pdf in the general response. The performance when using AIR is provided in the first row of the same table.
We will expand our evaluation using SA across other tasks and benchmarks in the final paper. **Limitations and overhead** We will improve the discussion of the limitations of the method in the main paper, including the assumptions on the classes and attributes. We will also discuss the computational cost of our method. --- Rebuttal Comment 1.1: Title: Reply to the response of the authors Comment: Thank you for your response to my concerns. They properly addressed my concerns.
Summary: This paper proposes a methodology to tackle compositional generalization, allowing the policy to generalize to previously unseen combinations of objects or to compose previously learned tasks. More specifically, the authors propose the Dynamic Attribute Factored RL (DAFT-RL) framework, which involves learning various graphs that model the objects and their interactions. Experimental results reveal superior performance relative to many baseline methods from the literature. Strengths: - Well-written. The paper is well-written, and the figures aid in the understanding of the methodology. - Strong experimental results. The proposed method is compared to a number of methods from the literature and outperforms all of them. - Ablations for major components. The paper includes ablations for the major components of the design, showing that all of the major components help performance. Weaknesses: - Comparison to prior work. It is unclear whether the comparison to prior work is completely fair. For instance, an imagination component is added to the baselines -- could this hurt the performance of the baselines? Furthermore, Table A4 in the Appendix shows strong results for some of the baselines (SMORL, NCS) during training, outperforming the proposed methodology, potentially implying overfitting. Is the capacity of the baselines comparable to that of the proposed method? Finally, is it possible that by using SA / AIR or the ground truth latent vector z from the simulator for image-based methods, you are actually hurting the performance of those methods which use a visual encoder other than SA / AIR? - Choice of tasks. The best-performing methods apart from the proposed method (NCS and LRN) evaluate their approaches on a different set of tasks from the one used in this work. It is unclear why the authors chose to deviate from this choice of tasks.
NCS [12] may be the SOTA and is the most recently published, so it would be beneficial to compare to NCS under their settings. - Incomplete ablations. The proposed method seems to be rather complex, and it is not convincing that such a complex method is needed. While ablations have been provided for the utility of each component at a higher level of abstraction, more ablations on the design choices for each module (factored class template graph, dynamic interaction graph, factored interaction pattern) would be beneficial. For instance, how critical is the action binding in the dynamic interaction graph? How important are the design choices in the dynamic action binding network, e.g. the choice of using soft-attention networks? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The lack of the dynamic interaction graph leads to the biggest drop in performance in the ablations -- if there are two blocks and two switches, and the number of interactions is limited, why does modeling the dynamic interactions in a graph help so much? - How would an end-to-end approach do? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no discussion about the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the suggestions and feedback. We have run several experiments to answer your questions and will include a complete version of these analyses in the final version. **Effect of imagination on baselines** We conducted an additional ablation study without the imagination component for each of the baselines. Our findings in Table 2 of the rebuttal pdf in the general response indicate that removing the imagination slightly improves the performance of SMORL and SRICS (still not better than our method DAFT-RL), but worsens the performance of the other baselines, e.g. NCS. **Good baseline performance during training without latent parameters or skill generalization** In simpler settings without latent parameters or skill generalization, some baselines like SMORL and NCS indeed showcase strong performance, in certain instances surpassing our method. On the other hand, the goal of our method is to improve compositional generalization capabilities, ensuring that learned policies can be applied across various skills and latent parameters. **Capacity of baselines (e.g. SMORL, NCS) w.r.t. DAFT-RL** For the RL part, the number of parameters is almost the same, while for the model estimation part we use roughly 2-2.5x the parameters of SMORL and NCS. We will add a detailed comparison in the appendix in the final version. **Ablation study in which we use the same (learned or ground truth) $\mathbf{z}$** We conducted two additional experiments: (1) adding the ground-truth latent parameters $\mathbf{z}$ to the baselines; (2) adding the parameters $\mathbf{z}$ learned by DAFT-RL to the baselines. The results for the 8-block stacking task are in Table 2 in the rebuttal pdf in the general response. They indicate that adding the ground-truth $\mathbf{z}$ to the baseline models unsurprisingly benefits the performance of all methods.
Moreover, using the learned $\mathbf{z}$ (extracted via DAFT-RL) improves the performance of all of the baselines. This is a testament to the quality of the latent parameters derived by DAFT-RL. Additionally, when we compare with the baselines that have access to the true latent parameters, the DAFT-RL approach with the learned $\mathbf{z}$ still manages to outperform them. This underlines the effectiveness of the proposed framework in not just extracting meaningful latent parameters, but also leveraging them effectively in the task. **Choice of tasks - comparing with NCS under their settings** We ran the comparisons with NCS on its original "Robogym" tasks and report the results below. We plan to report the complete results in the final version of the paper. We show that DAFT-RL has comparable performance to NCS in these settings. The preliminary results are given below (in each cell, the first number is ours and the second is NCS; the number in parentheses is the std):

| | 4 | 5 | 6 | 7 |
|---|---|---|---|---|
| Robogym (complete) | 0.60 (0.03)/0.64 (0.01) | 0.45 (0.02)/0.47 (0.01) | 0.46 (0.04)/0.49 (0.01) | 0.38 (0.03)/0.41 (0.01) |
| Robogym (partial) | 0.43 (0.04)/0.47 (0.01) | 0.32 (0.03)/0.33 (0.01) | 0.25 (0.04)/0.27 (0.01) | 0.19 (0.04)/0.22 (0.01) |

We also want to point out that in our setting we explore objects of different classes (or types, in NCS terminology) with different types of attributes (e.g. boxes have position and velocity, switches have position and activation), factored class template graphs and interaction patterns, as well as the effect of latent parameters like mass and friction. These settings therefore extend the tasks considered by NCS. **Ablations for more design choices, e.g. action binding - soft attention networks** We conducted additional ablations to compare the effectiveness of soft attention networks against hard attention ones [1] for action binding.
We also considered the case where there is no action binding in the system, which means the action can affect all objects. As captured in Cases VI and VII of Fig. 2 in our rebuttal pdf in the general response, these studies underscore that action binding is indispensable for an efficient world model. Moreover, our empirical results clearly favor soft attention networks over their hard counterparts. For the other mentioned modules, our original submission has methodically outlined their ablation studies. For ease of reference, and to illustrate the method's robustness across scenarios, we have encapsulated the comprehensive findings pertaining to the 3-Push+3-Switch task in Fig. 2 of the rebuttal pdf. [1] Biza, Ondrej, et al. "Binding actions to objects in world models." ICLR 2022 workshop on Objects, Structure and Causality. **Why do dynamic interactions help so much, even with two boxes and two switches?** The interaction graph plays an important role in the model. Even with only two boxes and two switches, the six potential interactions can affect the state transition dynamics substantially, e.g. consider a collision between boxes. As expected, and as confirmed by our results in Fig. A3 in Appendix D.3.2, the absence of an interaction graph leads to a more significant drop in performance as the number of objects increases (the number of symmetric interactions grows as n(n-1)/2). **End-to-end approach** We performed an additional ablation with an end-to-end version of our method, which combines Steps 1, 2.1 & 2.2 into a single step and uses $\mathcal{D}^\text{multi}$ as the training set. We show the results for this method and the baselines in Table 1 in the pdf of the general response. The end-to-end method is slightly worse than our multi-phase approach, but it is still comparable to the best baseline (NCS) across all tasks, as shown by the average.
Notably, our end-to-end version is better than all other baselines on the more complex tasks like 2-Push+2-Switch (combination of skills S + changing latent parameters L) and 3-Push+3-Switch (S + L + changing number of objects O). --- Rebuttal Comment 1.1: Comment: Thank you for such a detailed response. The reviewer appreciates all of the clarifications provided and the additional experiments conducted, particularly the one against the end-to-end approach and the imagination ablation. The experiment which compares the proposed methodology to NCS under the NCS settings is a little disappointing -- while the proposed methodology seems to clearly outperform NCS under the settings presented in the paper, the fact that this trend does not translate to the Robogym tasks from NCS seems to suggest that the gains are not very significant. While this still remains a weakness, the other experiments / clarifications showcased the strength of the proposed method in other ways -- considering all this, I am slightly increasing my rating to 5: Borderline accept. --- Reply to Comment 1.1.1: Comment: Firstly, we would like to express our sincere gratitude for your efforts and insightful comments, as well as your appreciation of our rebuttal and your reconsideration of your recommendation. We are pleased that we were able to address some of your concerns. Regarding the newly added Robogym tasks, it is true that we did not achieve a significant performance gap compared to NCS. However, as shown in the table, our performance is comparable to that of NCS. It is important to note that the primary focus of our paper is to solve environments involving multi-object interactions with unobserved physical latent parameters, specifically modeling a world with factored attribute-level interactions, such as the one in the modified OpenAI-Fetch environment.
The new experiments in the Robogym tasks serve to demonstrate that our approach could achieve comparable results to one of the current state-of-the-art methods, NCS, in general multi-object environments with interactions, even in cases where there are no specific latent parameters affecting the dynamics. We are grateful for your suggestion, and we will include the results of the Robogym tasks in the appendix of our final version. Once again, thank you for your time and effort!
Summary: The authors propose DAFT-RL, which uses object-centric representations to improve generalization to object attributes when training world models for model-based RL. For each class of object they learn a class template graph that maps class attributes to dynamics and rewards. The authors claim that this representation outperforms the state of the art in generalizing to unseen objects with new attributes on a simulated block stacking task. FYI: The Supplementary Material PDF just links to the paper pdf for me. Maybe intentional? Strengths: - The object-centric attribute factorization is a reasonable approach when the number of attributes that affect environment behavior (dynamics and rewards) is small, or when there are multiple objects that share attributes. - On the benchmarks shown, the model does outperform the baselines (although more on this in weaknesses). - I found the exposition easy to follow and the model section is well written (there's a fair bit of notation but at least it's clear). Weaknesses: - "e.g. an object's position and reward are affected by its previous position, but not by its appearance or activation state" - nit: appearance and dynamics are not independent in the real world. A metal object appears metal. A rubber object appears rubber. One can infer a lot about dynamics from appearance. I get the authors' point, but consider rephrasing for clarity. - "Often an object's transition and reward functions" - Agreed, but the reward for a given object is also task dependent. "Stack the red block" means the "blue block" has no reward (or even a penalty). Reword for clarity. - At a high level, the overall method is really just an extension to Object-Oriented (PO)MDPs to include object attributes. I'm not sure this crosses the bar for me for technical contribution. Adding attributes seems like an implementation detail that will help in certain applications, rather than a fundamental algorithmic contribution.
- Experimental evidence is probably the weakest component. OpenAI Fetch, SpriteWorld and Block Stacking are all very simple simulated environments. I think the world-model RL folks have moved past these, and they stand more as unit tests than real RL problems. Real-world results would be nice, but not necessarily required. In particular, I can see this method failing when the factored attribute graph is hard to infer or ambiguous (think: a "rubber duck" that is just painted metal to look rubbery). I don't see how these simulated environments are really stressing the generative model. - Likewise, the environments were largely chosen to result in the baselines not working, so I'm not entirely sure seeing margin there is all that compelling. i.e. choosing a set of environments with varying attributes until baselines fail, then explicitly predicting those attributes so that your model works, doesn't seem overly compelling as an experimental platform. If it was on a real-world experimental domain (with all the associated complexity of real data), that would be more compelling. - I also worry about how far this general approach scales. Assuming a known set of classes with known observable attributes isn't going to work for complex scenes. Likewise, how can you generalize to an unknown attribute that is not within your set? The rigidity of the object classes and attributes feels perfectly tailored to simple simulated environments. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: None really. I didn't find any parts of the paper confusing. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: Is there a limitations section? The "modelling assumptions" section is perhaps closely related.
There's also no societal impact statement, although I don't think it's necessary in this case given the scope of the experimental results (toy simulations). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback; addressing it will help us make the paper better. We address your concerns one by one in the following. **Supplementary Material** This was intentional: we combined the main paper and supplementary material in one document for the sake of readability during the review process, so one can click through to the right Appendix. We will remove it in the final version of the paper. **Inferring the dynamics from the appearance** That's a fair point; in many real-world scenarios appearance and dynamics will be strongly correlated, as you mention. We will rephrase and clarify the sentence by specifying that this is what happens in the benchmarks we consider. In particular, in these cases, once the object of interest is identified ("the red block"), its future position (and hence, in this case, the reward) is primarily influenced by its current position, and not e.g. by its color or texture. **Just an extension to Object-Oriented (PO)MDPs to include object attributes** We disagree with this point. First, following this logic, one could say that factored MDPs are just an extension of MDPs with factors/attributes, and hence also object-oriented MDPs are just an extension of MDPs with objects. Second, we think our approach neatly combines several ideas from disparate communities: starting from Factored MDPs and Object-Oriented MDPs, to dynamic Neural Relational Inference for modeling interactions, soft attention networks for action binding, object-centric representation learning for extracting attributes from pixels, and latent parameters that vary across objects from the factored adaptation literature. We also want to point out that our approach focuses on continuous states, while most Object-Oriented MDPs focus on discrete states. We have described the relation to these methods in detail in Appendix A.
**Benchmark environments are too simple and not realistic** For our evaluation, we used simulated benchmarks similar to those of the state-of-the-art methods (like NCS, SMORL, SRICS), which also allows us to compare fairly. To the best of our knowledge, almost all of the related work is evaluated on similar, or even simpler, simulated environments. We do agree that more realistic environments would be beneficial to the field, but we are not aware of any current realistic simulator/benchmark that would fit our setting. We would be happy if the reviewer could point us to a real-world setting, so we can use it in our final version. **The environments were largely chosen to result in the baselines not working** We disagree on this point. First, we proposed some modifications to the existing benchmarks, which we think are more realistic and in which each object has individual characteristics that can be latent, e.g. mass and friction coefficients for a block-stacking task. This is common when presenting a new idea or method; otherwise one could never introduce a new benchmark. Second, we also evaluated the baselines in the original settings of the benchmarks (refer to Tables 1, A3-A4). Our findings indicate that our framework performs on par with the other state-of-the-art methods even in these standard conditions. **Unknown set of classes or unknown attributes** First, we would like to point out that we do not need a priori known or observable attributes, since the attributes are extracted by object-centric approaches at training time and we are also able to model latent per-object parameters. On the other hand, we do have these limitations in terms of unknown attributes or classes: - *Unknown attributes during testing*: In our current setup, the model will not inherently generalize to attributes that are not part of the training attributes. We plan to address this limitation in the future.
One potential approach could involve leveraging active learning or open-set learning to enable the model to recognize and handle new attributes, even with limited exposure. - *Recognizing new object classes during testing*: For situations where a new class emerges during testing, we could consider using open-set recognition methods. These methods can identify when an object does not belong to any known class, prompting us to then gather more data on this object to learn its class template graph and interaction graph. - *Unknown classes*: Our ablation study on 3-Push+3-Switch (L+O+S) in Fig. 2 of the rebuttal pdf in the general response shows the importance of modeling known classes. In particular, "V: DAFT-RL with a single class" represents our method with a single class (which is the case if we cannot classify the objects), and shows a considerable decline in performance. This confirms that considering multiple object classes, even in a simplified environment, greatly aids policy learning. - *Scaling concerns*: We recognize the challenge of extending our approach to complex real-world scenes with many object classes and attributes. Incorporating more flexible representation learning techniques or utilizing unsupervised or semi-supervised learning might be potential directions to explore to make the framework more adaptable. **Limitations section** While there is no explicit limitations section, we will extend and improve the discussion of the limitations of the method in the main paper, including the assumptions on the classes and attributes mentioned in the previous answer. --- Rebuttal Comment 1.1: Title: Slight increase Comment: I thank the authors for their detailed feedback. I've read the rebuttal and the responses to the other reviewers.
I think the additional points of clarification will make the paper stronger (and so I'll bump up my recommendation to weak reject), but overall the technical contribution is marginal (and I'm not overly convinced by the rebuttal re: "extension to Object-Oriented (PO)MDPs"), and I'm still unconvinced this will work on "real world" problems outside of the toy environments shown. "one could say that factored MDPs are just an extension of MDPs with factors/attributes, and hence also object-oriented MDPs are just an extension of MDPs with objects" There's a false equivalency here. Factored MDPs at least introduce a new concept. There's no new concept here, just an engineered extension to Object-Oriented (PO)MDPs. I don't find the core idea novel or surprising. "Second, we think our approach neatly combines several ideas from disparate communities: starting from Factored MDPs and Object-Oriented MDPs, to dynamic Neural Relational Inference for modeling interactions, soft attention networks for action binding, object-centric representation learning for extracting attributes from pixels and latent parameters that vary across objects from factored adaptation literature." Yes, I agree. It's a bunch of existing ideas put together in a not overly novel way. "We would be happy if the reviewer would point us to a real-world setting, so we can use it in our final version." This is a cop out. Look at literally any paper from CoRL 2022 (or any real-world robotics conference) if you need inspiration for what "real world" problems look like. There are many such benchmarks. You're also able to run baselines on a benchmark you propose; you're not limited to the set of environments the baselines were run on. I'm going to unfairly paraphrase the arguments above as: "look, there's precedent for other papers that only include simulated results".
The problem I'm trying to convey is that the baselines didn't propose something that is a) clearly tailored to the set of simulated benchmarks you're evaluating on (with a scripted set of attribute classes that works only for those benchmarks) and b) unlikely to scale to real-world problems (IMO). Therefore I'd say the burden is on this work to apply the method to a real problem and demonstrate that it outperforms the baselines on that real problem.
Summary: The paper presents a new framework for learning world models of environments with multi-object interactions. The presented DAFT-RL framework identifies objects as instances of classes, for which class template graphs describe how an object's dynamics and reward depend on its attributes, and interaction pattern graphs describe how instances of different classes influence each other in terms of their attributes. The manuscript also presents how to learn a policy from this graph-based world model and shows that this policy can be adapted, without additional policy learning, to novel configurations by inference of object interactions and latent parameters. The proposed method is evaluated on several datasets and is shown to yield strong performance. Strengths: * To the best of my knowledge the proposed method is novel. * Design decisions in DAFT-RL are well-motivated and the overall idea is well explained. * The performance improvements are impressive, especially in the environments with more objects. Weaknesses: 1. I think it would be a lot easier for readers to parse the examples in Figure 1 if they were annotated with example attributes and parameters, e.g. position, velocity, friction coefficient, etc. 2. I didn't see a description of how the hyper-parameters were selected. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Line 66: I think it should say "Objects of the same class will have the same **set of**" to be as explicit as possible. I had to reread the sentence a couple of times. 2. Line 121: "some of the attributes of each object after the other". I think it should be "affect" instead of "after". 3. Line 797-798 (Appendix D.2.2): Is that two layers before and two after, or two layers in total, one before and one after? ## Acknowledgement of rebuttal I have read the rebuttal and the other reviews.
The rebuttal addressed my concerns by describing how hyperparameters were selected, providing a discussion of limitations, and some ideas on how to address these limitations. The rebuttal has convinced me of the appropriateness of my accept rating. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: I did not see a discussion of limitations. I'd suggest discussing for instance 1. how the assumption that actions only have a direct effect on one object at a time limits the generality of the framework. 2. how the framework could be applied, if the sets of classes and attributes are not available Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback; it will definitely improve our paper. We answer your questions inline below: > I think it would be a lot easier for readers to parse the examples in Figure 1 if they were annotated with example attributes and parameters, e.g. position, velocity, friction coefficient, etc. Thanks for the suggestion. Our original intention with Fig. 1 was to also introduce the notation, which is why we used the symbols. We also wanted to use it to clarify that the attributes are not predetermined, but that we can use any attribute extracted by OCRL methods like Slot Attention. On the other hand, using explicit parameters would help the readers to understand the setting better, so we will consider it, as well as other changes aimed at improving the clarity and readability of the paper. > I didn't see a description of how the hyper-parameters were selected. Thanks for the suggestion, we have reported the values in Appendix D.2.3, and we will add the hyperparameter selection procedure in our final version. In particular, for the GRU and attention networks, we optimized the number of hidden units by searching within the options {64, 128, 256}. The learning rate and batch size were set to commonly used values. As for the balancing parameters (sparsity regularization and KL divergence), we varied them from 0.1 to 0.9 in increments of 0.1. However, this was the approach for all tasks except step 1 in stacking. For step 1 in the stacking task specifically, our search range for these parameters was {0.01, 0.05, 0.1, 0.5}. > Line 66: I think it should say "Objects of the same class will have the same set of" to be as explicit as possible. I had to reread the sentence a couple of times. Thanks for the suggestion. We will clarify it in the final version as below: “Objects of the same class will have the same set of attributes, the same transition and reward functions, but can differ in the values of the attributes (e.g. 
they are at different positions) and in the value of the latent parameters (e.g. they have different friction coefficients).” > Line 121: "some of the attributes of each object after the other". I think it should be "affect" instead of "after" Thanks for pointing this out. We will correct it in the final version. > Line 797-798 (Appendix D.2.2): Is that two layers before and two after, or two, one of which before, the other after? We apologize for the confusion. It should be one before and one after, in total two layers. We will clarify it in the final version. > I did not see a discussion of limitations. I'd suggest discussing for instance 6. how the assumption that actions only have a direct effect on one object at a time limits the generality of the framework. Thank you for your suggestion, we will include a better discussion of the limitations of the method in the main paper, including the assumptions that we use for action binding, but also the assumptions that the classes are predetermined. We also wanted to point out that the assumption of an action having an effect on one object at a time is common in some robotic manipulation benchmarks like OpenAI-Fetch and it can be realistic in those scenarios. > How the framework could be applied, if the sets of classes and attributes are not available Thank you for raising this interesting point. We first wanted to point out that when combined with slot attention, our method automatically extracts the attributes from the representations, and is therefore able to learn templates and interaction graphs on top of these attributes. Instead, for our implementation with AIR, the attributes are predetermined and they represent position, velocity, size, and latent parameters. We wanted to clarify that we also do not need labels for each new object, but we need to train a classifier that can classify the object based on its extracted features, as explained in Appendix C.1.2. 
We will improve the explanation of this in the main part of the paper. In our current approach, we have not explicitly addressed situations where the sets of classes are not predefined or known. In this case, our method would revert to considering all objects of the same class and learn a class template graph and interaction pattern graphs that would be the union of all of the underlying graphs for each single (unknown) class. For completeness, we performed an additional ablation on 3-Push+3-Switch (L+O+S) and reported its results in Fig. 2 in the rebuttal pdf in the general response. In that case, “V: DAFT-RL with a single class” represents the case of using a single class for this environment, which seems to impact our current method quite strongly and therefore requires a rethinking of the approach for this setting. Extending our method to unknown classes is an interesting research direction that we plan to explore in subsequent research. For example, if, during testing, we encounter an object of an unknown class, we could employ an open-set recognition method, such as the one described in [1], to detect this novel class. Subsequently, it would be necessary to accumulate a few trajectories pertaining to this new object. This would allow us to learn its class template graph, interaction graph, and any associated latent parameters. [1] Geng et al. "Recent advances in open set recognition: A survey." IEEE transactions on pattern analysis and machine intelligence 43.10 (2020): 3614-3631. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions! I've read the other reviews and responses and will keep my score for now. My impression is that the method is quite complex and it'd be non-trivial to apply to more difficult settings, potentially limiting its impact. Open-set recognition would further increase the complexity. I will keep an eye on all comments and discussions to see whether they convince me of the broader applicability of the proposed method.
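The hyperparameter search described in the rebuttal above (hidden units searched over {64, 128, 256}; sparsity and KL balancing coefficients varied from 0.1 to 0.9 in increments of 0.1) could be sketched as a simple grid search. Note this is a hypothetical illustration, not the authors' code: the `train_and_eval` callback stands in for a full training run that returns a validation score.

```python
# Hypothetical sketch of the grid search described in the rebuttal.
# `train_and_eval(hidden, sparsity, kl)` is a placeholder for a full
# training run returning a validation score -- an assumption, not the
# authors' implementation.
from itertools import product

def grid_search(train_and_eval):
    hidden_units = [64, 128, 256]
    balance = [round(0.1 * i, 1) for i in range(1, 10)]  # 0.1 .. 0.9
    best_cfg, best_score = None, float("-inf")
    # Try every combination of hidden size, sparsity weight, and KL weight.
    for h, sp, kl in product(hidden_units, balance, balance):
        score = train_and_eval(h, sp, kl)
        if score > best_score:
            best_cfg, best_score = (h, sp, kl), score
    return best_cfg, best_score
```

For step 1 of the stacking task, the `balance` list would instead be `[0.01, 0.05, 0.1, 0.5]`, per the search range reported in the rebuttal.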
Rebuttal 1: Rebuttal: We thank all reviewers for the insightful feedback. Your suggestions and questions for more experiments will improve the quality of the paper, while addressing the misunderstandings will improve its clarity and readability, making it more accessible to a wider audience. We are happy about the positive feedback, mentioning that our paper proposes a “novel approach to an interesting problem” and “an excellent approach to step deeper from an object-wise world model”, that the design decisions are “well-motivated and the overall idea is well explained” and that the paper provides “strong experimental results”, which show that “the performance improvements are impressive”. We are also thrilled that some reviewers found it an “overall well written paper” and “easy to follow and the model section is well written”, and we will strive to improve its clarity in the final version also for the other reviewers and any future reader. We will also improve the discussion of the limitations of our method, as suggested by several reviewers. We wanted to clarify a few common points (which we will also clarify in the paper): we do not need predefined attributes for our method or labels for each new object. When combined with slot attention, our method automatically extracts the attributes from the representations. We also do not need labels for each new object, but we need to train a classifier that can classify the object based on its extracted features, as explained in Appendix C.1.2. On the other hand, in our current approach, we have not explicitly addressed situations where the classes are unknown, which is an interesting future direction. In the rebuttal pdf, we provide several ablations, which we describe in the following. **Table 1 - End-to-end** Several reviewers were interested in an end-to-end version of our method. We provide results in Table 1. 
DAFT-RL (End-to-end) combines Step 1, 2.1 & 2.2 into a single step and uses $\mathcal{D}^\text{multi}$ as the training set, so it does not require datasets with single objects. The end-to-end method is slightly worse than our multi-phase approach, but it is still comparable to the best baseline (NCS) across all tasks, as shown by the average on the last row. Notably, our end-to-end version is better than all other baselines on the more complex tasks like 2 Push+2 Switch (S+L) and 3 Push+3 Switch (S+L+O). **Table 2 - Imagination, true/learned parameters, SA** We ran several ablations on block stacking with 8 blocks, in particular: - We switched off the imagination component for each baseline. Our findings in Table 2 of the rebuttal pdf in the general response indicate that removing the imagination slightly improves the performance of SMORL and SRICS (still not better than our method DAFT-RL), but worsens the performance of the other baselines, e.g. NCS; - We provided the latent parameters learned by DAFT-RL as inputs to the baselines, which also improves the performance of the baselines (but still not better than our method) and shows the quality of the latent parameters that DAFT-RL learns; - We used the ground truth latent parameters as inputs to the baselines, which unsurprisingly benefits all methods. Interestingly, the DAFT-RL method without access to the ground truth still outperforms the baselines with access to the ground truth. This underlines the effectiveness of our framework in not just extracting meaningful latent parameters, but also leveraging them effectively in the task. - We used Slot Attention for the symbolic input-based approaches, SRICS, GNN, and LRN and observed that models leveraging SA yielded slightly worse results than those employing AIR, except for LRN. **Fig. 1 - Quality of learned graphs** We investigated the quality of learned graphs and latent parameters, and their effect on the RL performance w.r.t. 
the number of samples for the 3-Push + 3-Switch (L+O+S) task. In particular, we varied the amount of training data, ranging from {10%, 20%, 40%, 60%, 80%} of the original sample size (900 trajectories). We measured: - the $R^2$ coefficient of our learned parameters with the true latent parameters, both for a random policy (Fig. 1a) and for a pretrained policy (Fig. 1b). These results show that the performance degrades with smaller sample sizes, but it is still acceptable with 60% data, and that the difference between data collected with a random or pretrained policy is negligible at higher sample sizes; - the normalized Structural Hamming Distance between the reconstructed graph and the ground truth graph, as well as the success rate of the learned policy. As expected, the more data, the more accurate the graph, and the better the performance of the policy trained with the more accurate world model. For the number of samples in the main paper, the graph is perfectly reconstructed, which also means a good RL performance. Additionally, for limited data (e.g. 0.2 or 0.4 of the original dataset) leveraging a pre-trained policy enhances both graph and policy learning. However, as the amount of data increases, the benefits of using a pre-trained policy diminish. **Fig. 2 - Single class, action binding design choices** We report the ablation studies from Appendix D.3.2 for 3-Push+3-Switch (L+O+S), improve the naming for I-IV to emphasize what the ablations are doing, and extend them with three additional ablations. All of these ablations show that our design choices have a positive effect on our method. In particular: – “V: DAFT-RL with a single class” represents the case of using a single class for this environment, which seems to have the strongest negative effect; – “VI. DAFT-RL with dense action binding” represents the case in which the action will affect all objects, which also substantially lowers the performance of our method; – “VII. 
DAFT-RL with action binding using hard attention network” represents a design choice in terms of hard attention vs soft attention, which also reduces the performance. Pdf: /pdf/428de23e63d64401e55bbb6fa1086b3ea6fc54f4.pdf
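The graph-quality metric mentioned in the ablations above, the normalized Structural Hamming Distance between the reconstructed graph and the ground-truth graph, admits a minimal sketch (assuming binary adjacency matrices without self-loops; not the authors' implementation):

```python
def normalized_shd(learned, true):
    """Fraction of possible directed edges on which the two graphs disagree.

    `learned` and `true` are n x n binary adjacency matrices (lists of
    lists); the diagonal (self-loops) is excluded from the edge count.
    """
    n = len(true)
    mismatches = sum(
        learned[i][j] != true[i][j]
        for i in range(n) for j in range(n)
        if i != j
    )
    return mismatches / (n * (n - 1))
```

A distance of 0 corresponds to a perfectly reconstructed graph, matching the full-data regime reported in the rebuttal.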
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors have proposed Dynamic Attribute FacTored RL (DAFT-RL). For each class of object, class template graphs, interaction pattern graphs, and interaction graphs are computed. Through this template world model a policy can be learned for direct application in a new environment by just estimating the interactions and latent parameters. The authors showed that DAFT-RL outperforms the state-of-the-art in three compositional generalization benchmarks. Ablation studies are also provided. Strengths: The paper brings in the philosophy of class template graphs, interaction pattern graphs, and interaction graphs for multi-object RL. This is an overall well written paper with good supplementary material details. The results validate the claims. Weaknesses: No video demo or code is provided. The definition at line 163 looks cluttered. A pictorial example of a sample task is expected in the introduction (borrow from the supplementary and add it here). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is the same paper content repeated in the supplementary up to page 9? Remove it. I was wondering why Dynamic Bayesian Networks (DBNs) were chosen. How can standard knowledge graph and graph network theory approaches be leveraged in this context? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: How will this method handle multi-object, multi-instance settings with ambiguity? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your useful feedback. We will answer each issue in the following. **Code** Following the NeurIPS policies, we will provide the code in an anonymized link as a separate message to the AC. **Clarity, visual examples** We will try to improve the clarity of the paper, including simplifying the definition and adding a visual example of a sample task. **Supplementary** This was intentional: we combined the main paper and the supplementary for the sake of convenience in the review phase (so the links work). For the final published version, we will separate the supplementary materials. **DBNs, KGs** Dynamic Bayesian Networks are a standard way to model Factored MDPs and POMDPs (see for example the references [18-23]) and their extension, relational MDPs [24-26], which are the basis of our factored approach. In our setting, we use only one type of relation (as most MDPs) in a closed-world context, so we do not need knowledge graphs (which also do not have an obvious temporal aspect, at least in their standard form). Using graphical representations that are closer to first-order logic might be an interesting future extension. **Multi-object multi-instance objects with ambiguity** In our setting, there is no issue of ambiguity in terms of separating the single objects. If we have multiple objects with the same attributes, we can still leverage SA or AIR to extract a representation for each object and then model them as individual objects in our framework. This is even clearer in the settings in which the objects have different latent parameters, e.g. mass or friction coefficients.
Summary: The paper presents DAFT-RL, which is an object-based dynamics model for reinforcement learning that learns factorized relationships between attributes of objects. These relationships are learned in several steps. First, single-object episodes are created using a random policy, and these are used to train a model of how the attributes of each object class interact with each other over time. These graphs are then held fixed, and multi-object interaction data is used to train additional graphs representing interactions between the attributes of each pair of classes, and a graph representing which object instances interact with each other. These learned graphs then help train a next-state dynamics model that predicts each object's attribute values conditioned on the relevant attribute values read off from the graphs. Finally, this model is used to either train a policy, or for a planning method like Model Predictive Control. DAFT-RL is evaluated in several experiments, where the number of objects and the latent factors of those objects change. The paper highlights that DAFT-RL performs favorably compared to other object-centered reinforcement learning approaches on three simulated robotic benchmarks. Strengths: - The proposed model exploits object structure that exists in many RL problems to get better performance than a broad set of baselines. To my knowledge, this is a novel approach to an interesting problem. - DAFT-RL can generalize to environments with different numbers of objects than what was seen during training. - Ablation studies are done to show relative importance of each piece of the proposed approach Weaknesses: - Clarity of writing can be improved. In particular, Section 2 and Figure 1 were difficult to follow because the running example was not explained clearly. $C_1.s_3$ for example is not defined, and context suggests there are only two attributes of boxes. The appendix clears up the issues, so this may be fixable with some rewriting. 
- The multi-phase training seems a bit cumbersome. How realistic is it to be given interactions with single objects before observing multi-object interactions? - It is unclear how DAFT-RL would scale to scenes with many classes of objects, or objects with many attributes. Would the graphs become very difficult to learn? - One of the main contributions is the decomposition of object interactions to the interactions of individual attributes. Is there some ablation that could be done to analyze the benefits over modeling object-level interactions only? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What attributes are extracted from objects in the pixel observation case? - What might the latent parameters of objects be in pixel observation environments, and would it be difficult to learn meaningful latent representations from a small set of data? - How sensitive is the RL performance to the quality of the graphs extracted in the first two phases of training? On a similar note, how sensitive are the graph learning steps to the amount of data used to train them? I imagine there may be situations where it is difficult to get a full sense of the class template graphs with data coming only from a random policy. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper would improve from a more forthright examination of the limitations of the method. How does the method scale to more complicated tasks with more objects? What is holding the method back from being able to train end to end? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback; it will surely improve our paper, especially in terms of clarity. We answer each issue in the following. **Interactions with single objects and end-to-end-training** We think that in many real-world cases, one would first train an agent in simple environments (e.g. single objects), before actually using it in complicated settings, similar to curriculum learning. Nevertheless, we performed an additional ablation with an end-to-end version of our method, which combines Step 1, 2.1 & 2.2 into a single step and uses $\mathcal{D}^\text{multi}$ as the training set. We show the results for this method and baselines in Table 1 in the pdf of the general response. The end-to-end method is slightly worse than our multi-phase approach, but it is still comparable to the best baseline (NCS) across all tasks, as shown by the average. Notably, our end-to-end version is better than all other baselines on the more complex tasks like 2 Push+2 Switch (combination of skills S + changing latent parameters L) and 3 Push+3 Switch (S+L+ changing number of objects O). **Scaling with many classes of objects, or objects with many attributes** Currently we are limited by the benchmarks we use, which consist of a restricted number of classes (e.g. 2 classes in OpenAI Fetch) and where each object has only attributes such as velocity, position, color, and shape. While we extended the number of attributes by adding latent parameters such as mass and friction coefficients, we think using more realistic simulators with multiple types of objects and more complex attributes is an important next step for this research direction. As a proof of concept, we will include a numerical experiment in which we simulate multiple classes and large attribute sets with random graphs in our final version, and provide an analysis of the learned graphs. 
We do not expect the graphs to be difficult to learn, but that depends on having enough samples, similar to what is shown in one of the following answers. **Ablation with only object-level interactions** We have conducted this ablation examining interactions only at the object level, and show an example in Fig. 2D, while we provide the results for all Push and Switch tasks in Appendix Fig. A3. In both figures, "DAFT-RL w/o factored interaction pattern" or “IV” refers to this scenario, where we use a dense (fully connected) interaction pattern graph. In all tasks, not modeling the factored interaction patterns worsens the performance. We will clarify the description of the ablations in the paper to make this clearer. **Attributes in the pixel observation case** When using slot attention, the extracted attributes are vectors that do not have direct physical interpretations. This allows us to deal with attributes that are not predetermined. When using AIR, the attributes are fixed and they are position, velocity, size, and latent parameters. **Latent parameters in pixel observation environments** In pixel environments, the latent parameters are properties of objects that are not directly observable from a single image, but influence the dynamics. In our experiments, we considered as latent parameters the mass of an object or its friction coefficient. To learn these latent parameters, we consider sequences of images, capturing the temporal dynamics and interactions of objects. We detailed the number of samples used for learning these latent parameters across benchmarks in Appendix D.2.3. We also performed an additional ablation, where we varied the amount of training data, ranging from {10%, 20%, 40%, 60%, 80%} of the original size (900 trajectories), and measured the $R^2$ coefficient of our learned parameters with the true latent parameters (a coefficient of 1 is perfect correlation). The results can be seen for 3-Push+3-Switch (L+O+S) in Fig. 
1 in the pdf of the general response, where Fig. 1a shows the results for a random policy and Fig. 1b shows the results for a pretrained policy. These results show that the performance degrades with smaller sample sizes, but it is still acceptable with 60% data. We also report the results for the pixel case for the random policy in the 8-block stacking task below (where the original trajectories are 700), with similar insights:

| Ratio | 0.8 | 0.6 | 0.4 | 0.2 | 0.1 |
|-------|------|------|------|------|-------|
| $R^2$ | 0.75 | 0.62 | 0.31 | 0.19 | -0.14 |

**Quality of graphs - impacts on RL, sensitivity to small samples** The quality of the graphs, which represent our world model, directly impacts the RL performance. Our analysis of the learned graph for OpenAI Fetch in Appendix Fig. A4 indicates that the extracted graph closely mirrors the underlying physical processes, which helps the RL performance. We performed additional ablations on the graph estimation with lower sample sizes and reported them in Fig. 1(a) in the pdf of the general response. In these experiments, we varied the amount of training data for the model estimation stage, ranging from {10%, 20%, 40%, 60%, 80%} of the original sample size, and reported the normalized Structural Hamming Distance between the reconstructed graph and the ground truth graph, as well as the success rate of the learned policy. As expected, the more data, the more accurate the graph and the better the performance of the policy trained with the more accurate world model. For the number of samples we considered in the main paper, the graph is perfectly reconstructed, which also means a good RL performance. We also tested the impact of using a random policy to collect the trajectories vs. a pre-trained policy in Fig. 1(b). For limited data (e.g. 0.2 or 0.4 of the original dataset) leveraging a pre-trained policy enhances both graph and policy learning. 
This is likely because the pre-trained policy provides a more informative data sample compared to random actions. However, as the amount of data increases, the benefits of using a pre-trained policy diminish. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and additional ablation experiments. My concerns have been addressed.
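The $R^2$ coefficient used in the rebuttal above to score learned latent parameters against the true ones follows the standard coefficient-of-determination definition, sketched here as a generic formulation (not the authors' code):

```python
def r_squared(predicted, true):
    """Coefficient of determination: 1 is a perfect fit, 0 matches a
    constant mean predictor, and negative values (as in the 0.1-ratio
    row of the rebuttal's table) are worse than predicting the mean."""
    mean_true = sum(true) / len(true)
    ss_res = sum((t - p) ** 2 for t, p in zip(true, predicted))
    ss_tot = sum((t - mean_true) ** 2 for t in true)
    return 1.0 - ss_res / ss_tot
```

Under this definition, "acceptable with 60% data" corresponds to an $R^2$ that is still well above the constant-predictor baseline of 0.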
Self-Chained Image-Language Model for Video Localization and Question Answering
Accept (poster)
Summary: This work builds a joint model for temporal language grounding and video question answering. The model is built on a state-of-the-art image-language model, BLIP-2, and finetunes it in a parameter-efficient way to derive a localizer and an answerer. The localizer finds language-aware keyframes in a video and the answerer uses these frames to predict the answer. The answerer is also used to generate pseudo-labels to refine the localizer. Experiments on multiple video question answering datasets in both finetuning and zero-shot settings show that the proposed method achieves state-of-the-art results and improves slightly over the BLIP-2 baseline. Strengths: - Leveraging image-language models for video-language tasks is an important direction, and localizing language-aware keyframes seems a good idea to achieve this. - The method achieves strong results on a wide variety of benchmarks in both finetuning and zero-shot mode. Weaknesses: - Most of the improvement compared to state-of-the-art methods come from the use of the BLIP-2 model. The proposed method only show small improvement over BLIP-2+concat in finetuning setting (see Table 1). - Can the method generalize with other image-language models, e.g. BLIP, and if so how does it perform? - The localizer relies on manual temporal language grounding annotations from QVHighlights. Could it benefit from weak supervision from e.g. HowTo100M like Moment-DETR? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations and potential negative societal impact are discussed in the Supplementary Material. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### **Weakness 1**

> Most of the improvement compared to state-of-the-art methods comes from the use of the BLIP-2 model. The proposed method only shows small improvement over BLIP-2+concat in fine-tuning setting (see Table 1).

Our SeViLA consists of Localizer + Answerer, where both have BLIP-2 (visual encoder + Q-former + LLM) architecture, with separate Q-former parameters. Note that the ‘**BLIP2 concat**’ in Tables 1&2 is equivalent to our '**Answerer**' as a contribution that extends BLIP-2 to adapt to video.

Our Answerer (BLIP-2 concat) performs better than the original BLIP-2 with single frame inputs (BLIP-2 voting). And our SeViLA, which incorporates our Localizer with our Answerer, further boosts the performance. In both fine-tuning and zero-shot settings (Tables 1&2, and as follows), our SeViLA outperforms the original BLIP-2 (BLIP-2-voting) with non-trivial improvements.

| **Model** | **NeXT-QA (Avg.)** | **STAR (Avg.)** | **How2QA** | **TVQA** |
|----|----|----|----|----|
| (Zero-shot) | | | | |
| BLIP-2 voting | 62.7 | 40.3 | 69.8 | 35.7 |
| SeViLA | 63.6 (+0.9) | 44.6 (+4.3) | 72.3 (+2.5) | 38.2 (+2.5) |
| (Fine-tuning) | | | | |
| BLIP-2 voting | 70.1 | 51.8 | 79.6 | 54.5 |
| SeViLA | 73.8 (+3.7) | 64.9 (+13.1) | 83.6 (+4.0) | 61.6 (+7.1) |

### **Weakness 2**

> Can the method generalize with other image-language models, e.g. BLIP, and if so how does it perform?

As you suggested, we experimented with extending SeViLA with another recent Image-Language model (MiniGPT4 [a]), and show the results as follows. We find our proposed self-chaining scheme can also improve the performance of zero-shot MiniGPT4, and we expect that finetuning the model will further improve the performance.

| **Model** | **NExT-QA (Avg.)** |
|----------------------------------------|--------------------|
| MiniGPT4 Answerer | 52.7 |
| MiniGPT4 Localizer + MiniGPT4 Answerer | 53.5 |

Ref: [a] D. Zhu et al. 
MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models. ArXiv pre-print, 2023.

### **Weakness 3**

> The localizer relies on manual temporal language grounding annotations from QVHighlights. Could it benefit from weak supervision from e.g. HowTo100M like Moment-DETR?

We’d like to note that our localizer brings a 0.5% (62.4 in Table 3 row B vs. 62.9 in Table 5) boost on NeXT-QA, without pretraining. Our localizer even outperforms the Moment-DETR pre-trained on HowTo100M + QVH in Table 6 (comparison with other localization methods). This shows that our temporal grounding method is already effective even without manual annotations.

Following your suggestion, we also explored weakly-supervised pretraining using ASR similar to Moment-DETR. As shown in the following table, our Localizer performance improves with the weakly supervised pretraining, closing the gap with the pretraining with manual annotations (QVH).

| **Localizer** | **NeXT-QA (Avg.)** |
|--------------------------------------|--------------------|
| w/o Localizer | 62.4 |
| Moment-DETR | 62.0 |
| Our Localizer (without pre-training) | 62.9 |
| Our Localizer (weakly pre-trained with QVH ASR) | 63.2 |
| Our Localizer (pre-trained with QVH) | 63.6 |

--- Rebuttal Comment 1.1: Comment: I thank the authors for providing a rebuttal, have read the other reviews, and stand by my original rating of weak accept. Below are detailed answers: Weakness 1: I agree with the authors that adapting BLIP-2 to videos by concatenating visual features output from the Q-former outperforms applying BLIP-2 separately to all video frames and using voting. However, I do not see this as a major contribution given that concatenating visual features from different frames before multi-modal modeling is a standard technique used in video-text modeling, see MERLOT for instance. 
Weakness 2: I appreciate seeing that MiniGPT-4 also benefits from the localizer for zero-shot NExT-QA, and encourage the authors to include additional image-text models and/or other evaluation datasets to confirm the generalizability of the proposed localizer+answerer approach.

Weakness 3: I appreciate seeing that the NExT-QA accuracy improves with the weakly-supervised pretraining of the localizer on ASR.

---
Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback; we are glad that you appreciate the MiniGPT-4 and ASR results on NExT-QA. We agree that concatenating visual features isn't our primary contribution (although one interesting difference is that we do not require fine-tuning). We will add all these changes and clarifications to the final version.
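The 'BLIP-2 voting' baseline discussed in this thread (apply the single-frame model to every frame, then take the majority answer) can be sketched as follows; `single_frame_qa` is a hypothetical stand-in for BLIP-2, not the authors' code:

```python
from collections import Counter

def vote_answer(frames, question, single_frame_qa):
    """Voting baseline: query a single-frame QA model on each frame
    independently and return the most common answer."""
    answers = [single_frame_qa(f, question) for f in frames]
    return Counter(answers).most_common(1)[0][0]

# toy stand-in: most frames support one answer
qa = lambda f, q: "golf" if f % 2 == 0 else "photo"
majority = vote_answer(range(5), "what are they doing?", qa)  # "golf"
```

The rebuttal's point is that this baseline ignores cross-frame context, which the concatenation-based Answerer restores.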
Summary: The paper presents the SeViLA framework, which addresses the limitations of existing image-language models for video question answering. It introduces two modules, Localizer and Answerer, that are fine-tuned from a pre-trained image-language model. SeViLA utilizes chaining for cascaded inference and self-refinement. The Localizer identifies language-aware keyframes, and the Answerer predicts the answer based on these keyframes. Additionally, the Answerer generates keyframe pseudo-labels to refine the Localizer, eliminating the need for costly video moment annotations.

Strengths:
1. The paper is well-written and easy to understand.
2. The concept of localizing and then answering is intuitive and effective.
3. Extensive experiments demonstrate that SeViLA achieves state-of-the-art performance on challenging video question answering benchmarks. The paper also provides a comprehensive analysis, including comparisons with other localization models and the impact of keyframe variations.

Weaknesses:
1. The motivation is quite similar to that of LGDN (which also addresses VideoQA tasks). While the paper mentions LGDN in the introduction and related work sections (which I appreciate), I suggest the authors further clarify the differences between the two approaches in the related work section.
2. The self-chaining approach requires additional computational resources (keyframes need to be recomputed). The paper should provide information on the additional computational costs involved.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **Weakness 1**
> The motivation and LGDN are quite similar. While the paper mentions LGDN in the introduction and related work sections (which I appreciate), I suggest the authors further clarify the differences between the two approaches.

While LGDN selects salient frames with two different modules, an image model (ViT) and a language model (BERT), and answers questions with a separate QA model (Transformer) trained with multiple objectives, our SeViLA proposes a simple and unified self-chaining of a single image-language model (BLIP-2) to tackle both (1) the keyframe localization task and (2) the question answering task. We will add more clarification of this in the paper.

As pointed out by Reviewer mWtS, leveraging image-language models for video-language tasks is an important future direction, and as recognized by Reviewers TABX and b5LJ, our work shows a novel self-chained way to effectively use a single large image-language model for video-language tasks.

### **Weakness 2**
> The self-chaining approach requires additional computational resources (key frames need to be recomputed). The paper should provide information on the additional computational costs involved.

In the following table, we show a memory, running time, and parameter size (ViT + FlanT5 + Q-former) comparison between our Answerer alone and our Localizer + Answerer. As the Localizer and Answerer share most parameters, adding the Localizer has a very small additional memory footprint.

| **Model** | **Memory (GB)** | **Running Time (sec./sample)** | **Parameters (B)** |
|----|--|--------------|-------------------|
| Answerer (4) | 7.56 | 1.79 | 4.1 |
| Localizer + Answerer (SeViLA, 32->4) | 7.98 | 3.28 | 4.2 |

---
Rebuttal Comment 1.1: Comment: Thank you for your response! Most of my concerns are solved, so I would like to maintain my positive score.
--- Reply to Comment 1.1.1: Comment: Thanks for your response and we are glad that our response has addressed your questions and that you have a positive rating!
Summary: The paper proposes a simple yet effective framework for video localization and question answering using pretrained image-language models. The key idea is to introduce a LOCALIZER module that localizes the keyframes in a video in order to ignore irrelevant frames and better answer the question. The LOCALIZER module is realized with a separate image-language model, and the module can be further improved by fine-tuning on the pseudo-labels generated from the QA results of the ANSWERER module. The proposed framework, SEVILA, achieves state-of-the-art VQA results on multiple challenging benchmarks under both fine-tuned and zero-shot setups.

Strengths:
1. The proposed framework is simple yet effective in extending image-language models to tackle video-related tasks such as video question answering and moment localization. The idea of using the LOCALIZER module to detect keyframes in a long video is technically sound and the results are convincing.
2. The paper provides an extensive ablation study to analyze different components of the framework under different setups, e.g., the impact of selecting keyframes for VQA, the impact of self-refinement for the LOCALIZER module, and how the LOCALIZER compares with some existing keyframe localization methods. While there exist many keyframe localization methods, it's interesting to see the language-model-based method used in this paper get comparable or even better results in the ablation study.
3. The paper is well organized and clearly written, and the description of technical details is clear and easy to follow.

Weaknesses:
1. The technical contribution of the work is relatively weak. The idea of selecting keyframes in a video for VQA is straightforward and widely explored in prior work. The proposed self-refinement strategy is also a standard semi-supervised learning approach, and no new training strategies/techniques are proposed for the fine-tuning of image-language models.
2.
While I understand that the main goal of this work is to utilize image-language models for VQA, the limitations of image-based models for video-related tasks should not be ignored. First, image-based models may not be sufficient to accurately localize the keyframes in a long video. Second, temporal modeling is also missing in the ANSWERER module, which makes the model less capable of modeling temporal dependencies compared with video-based models. These limitations may not be significant in the current VQA benchmarks since the questions are less sensitive to temporal dependencies and reasoning.

Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
1. Is it necessary to use two different Q-formers for the LOCALIZER and ANSWERER? Since both modules are image based, what if we share the same Q-former for both?
2. What if we make the self-refinement an iterative process? In other words, we refine the ANSWERER module after the self-refinement of the LOCALIZER, and vice versa. It would be interesting to see when the performance saturates with such an iterative refinement process.

A minor suggestion: I don't think Figure 4 is a good qualitative example because (1) the action of the ladies is barely visible in the small figure, let alone the intention of the action; and (2) the question can be answered by only checking the question itself, without knowing the visual content, since "staring out" and "hands above their eyes" are explicitly mentioned.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed the limitations of the work and its potential social impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **Weakness 1**
> The technical contribution of the work is relatively weak. The idea of selecting keyframes in a video for VQA is straightforward and widely explored in prior work. The proposed self-refinement strategy is also a standard semi-supervised learning approach, and no new training strategies/techniques are proposed for the fine-tuning of image-language models.

We agree that keyframe selection in VideoQA is a popular topic, as we mentioned in the related works (Lines 94-102). Different from previous works [23, 40, 49], which link an image model (e.g., ViT) and a language model (e.g., BERT) to first select keyframes by vision-text similarity and then answer questions with a third QA model, we self-chain a single image-language model to tackle both keyframe localization and question answering by prompting the LLM. Our SeViLA shows that a pre-trained large image-language model with parameter-efficient fine-tuning (PEFT) is able to handle both video localization and question answering in a self-chaining style.

As Reviewer mWtS pointed out, using image-LMs for video-language tasks is an important direction since image-LMs are improving rapidly, and our work shows an efficient way to leverage those image-LMs for video-level tasks. Besides, as you mentioned, our work shows that an image-LM has strong potential for video localization (Table 4, video moment retrieval results, and Table 6, comparison with other keyframe localization methods).

### **Weakness 2**
> First, image-based models may not be sufficient to accurately localize the keyframes in a long video.

We'd like to note that multiple recent works [3, 31] have also found single-frame localizers effective and efficient on various long-video QA benchmarks. In our experiments, as listed in Table 3 (and the table below), our single-frame Localizer demonstrates non-trivial performance boosts on a multitude of long-video QA benchmarks.
| **Model** | **NExT-QA (Avg.)** | **STAR (Avg.)** | **How2QA** | **TVQA** |
|---|---|----|---|---|
| (zero-shot) | | | | |
| Answerer-only | 62.4 | 42.2 | 70.8 | 36.6 |
| SeViLA | 64.6 (+2.2) | 45.5 (+3.3) | 72.9 (+2.1) | 39.1 (+2.5) |
| (fine-tuning) | | | | |
| Answerer-only | 72.6 | 62.0 | 82.2 | 59.8 |
| SeViLA | 73.8 (+1.2) | 64.9 (+2.9) | 83.6 (+1.4) | 61.6 (+1.8) |

> the limitation of image-based models for video-related tasks should not be ignored.

We acknowledge that the significance of temporal modeling for video-level tasks cannot be overlooked. In various sections, we emphasize its role in video-language tasks and suggest enhancements in temporal designs for the Localizer. In the Limitations (supp. Lines 136-139), we point out the shortcomings and failure cases of the current single-frame Localizer. In Section 4.5 (Lines 248-253) and Table 4 (results on video moment retrieval), we suggest more temporal designs for the Localizer to improve language-aware localization. Moreover, our analysis in Section 4.6 (Lines 294-299), Table 7 (oracle performance analysis), and the Limitations (supp. Lines 136-139) indicates potential areas of improvement and underscores the importance of future research in temporal localization.

Furthermore, in this rebuttal, we've expanded our Localizer to a multi-frame mode by concatenating frames into a long image **before** the Q-Former. This allows for full attention across all frames, enabling temporal modeling. The results follow; we find that the 4-frame Localizer performs worse than the single-frame Localizer in both zero-shot and fine-tuning settings.
| **Answerer** | **# frames of Localizer** | **NExT-QA (Avg.)** |
|----|----|----|
| zero-shot | 1 | 64.6 |
| zero-shot | 4 | 63.6 (-1.0) |
| fine-tuned | 1 | 73.4 |
| fine-tuned | 4 | 71.3 (-2.1) |

> Second, temporal modeling is also missing in the ANSWERER module, which makes the model less capable of modeling temporal dependencies compared with video-based models.

Please see the general response.

### **Question 1**
> What if we share the same Q-former for both modules?

Please note that Table 5 row 1 shows the setting where the Localizer and Answerer share all parameters, including the Q-former. The model achieves 62.9% on NExT-QA and 70.7% on How2QA, outperforming the strong InternVideo and BLIP-2 voting baselines. We will also experiment with multi-task fine-tuning and discuss the results in the paper.

### **Question 2**
> What if we make the self-refinement an iterative process?

This is an interesting idea, thanks for your suggestion! We conducted experiments with iterative self-refinement and show the results in the table below. We found that self-refinement helps up to two iterations and saturates afterward. We will incorporate this result in the paper.

| **Iteration** | **NExT-QA (Avg.)** |
|-----------|--------------------|
| 1 (current) | 73.8 |
| 2 | 74.2 |
| 3 | 73.7 |

### **Minor Suggestion**
> Qualitative visualization example

Thanks for your suggestion on the qualitative example. We argue that even in this example the visual content is necessary, because the correct answer "to see better" (and not "to pose for photo") is only clear once the video shows that they are playing golf or that there is no photographer.

Please note that we have additional visualization examples in supplementary Figure 3, where all the option events are plausible without seeing the video, and our model localizes the correct moment and gives the right answer.
```
Q: What did both of them do after completing skill?
A1: jump
A2: bend down
A3: raise their hands
A4: turn around
A5: take off clothes
```
We will update the qualitative example in the main paper with the new example in the rebuttal PDF to better show the effectiveness of our method.

---
Rebuttal Comment 1.1: Comment: Thanks for your response. Most of my concerns are well addressed in the rebuttal and I'd keep my original positive rating.

---
Reply to Comment 1.1.1: Comment: Thanks for your response; we are glad that it has addressed your questions and that you maintain a positive rating!
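The forward chain discussed in this thread (the Localizer scores frames one at a time, the Answerer consumes only the selected keyframes) can be sketched as follows; `score_fn` and `answer_fn` are hypothetical stand-ins for the prompted BLIP-2 modules, not the authors' implementation:

```python
def select_keyframes(frames, question, score_fn, k=4):
    """Localizer (single-frame mode): score each frame independently for
    relevance to the question and keep the top-k, in temporal order."""
    ranked = sorted(range(len(frames)),
                    key=lambda i: score_fn(frames[i], question),
                    reverse=True)
    keep = sorted(ranked[:k])  # restore temporal order for the Answerer
    return [frames[i] for i in keep]

def sevila_infer(frames, question, score_fn, answer_fn, k=4):
    """Forward chain: localize, then answer on the keyframes only."""
    keyframes = select_keyframes(frames, question, score_fn, k)
    return answer_fn(keyframes, question)

# toy stand-ins: relevance peaks in the middle of a 32-frame clip
score_fn = lambda f, q: -abs(f - 16)
answer_fn = lambda fs, q: f"answer from frames {fs}"
out = sevila_infer(list(range(32)), "what happens?", score_fn, answer_fn)
```

This also makes the reviewers' point concrete: each call to `score_fn` sees a single frame, so any temporal dependency must be recovered downstream by the Answerer.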
Summary: This paper proposes a novel framework called Self-Chained Video Localization-Answering (SEVILA) that leverages a single image-language model (BLIP-2) for both temporal keyframe localization and question answering in videos. The framework consists of two modules, Localizer and Answerer, which are parameter-efficiently fine-tuned (LoRA fine-tuning) from BLIP-2. The paper demonstrates that the SEVILA framework outperforms several strong previous works on video question answering and event prediction benchmarks.

Strengths:
(1) The paper proposes a novel framework that addresses the issue of missing important visual cues in video question answering tasks by leveraging a single image-language model for keyframe localization and question answering.
(2) The self-chaining mechanism in the framework allows for cascaded inference and self-refinement, improving the accuracy of temporal localization without the need for expensive annotations.
(3) The paper provides a comprehensive analysis of the proposed framework, including the impact of temporal localization, the self-refinement process, and the number of keyframes.

Weaknesses:
1. In the backward chain, the pseudo-labels are frame-level, and if a single frame of information cannot give the correct answer, the Localizer is considered to have given the wrong keyframe. This is obviously unreasonable and ignores the important time dependencies provided by the intermediate frames. This is probably why there was no significant change in performance when the self-refinement part was removed in the ablation study.
2. The proposal is not good at dealing with temporal causality. Because BLIP-2 is an image-text MLLM, there is no space-time inference; the authors directly add the Q-former queries generated by each frame to the word tokens of the LLM. This is equivalent to letting the Q-former itself realize temporal understanding, an ability the Q-former does not have. Could you clarify?
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: see above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
### **Weakness 1**
> In the backward chain, the pseudo-labels are frame-level, and if a single frame of information cannot give the correct answer, the Localizer is considered to have given the wrong keyframe.

We note that allowing multiple frames in the Localizer can also cause the same misleading issue as in the single-frame Localizer, since both a single frame and a short video clip have a chance of giving incorrect pseudo-labels. Because there is no frame-level annotation, this is a general issue for such a weakly-supervised pseudo-label training scheme, regardless of whether single or multiple frames are used.

> This is obviously unreasonable and ignores the important time dependencies provided by the intermediate frames.

**We still acknowledge the importance of temporal modeling** for video-level tasks. In Section 4.2.a (Lines 190-193), we highlight that temporal modeling matters for video-language tasks. In Section 4.5 (Lines 248-253) and Table 4 (results on video moment retrieval), we suggest more temporal designs for the Localizer to improve language-aware localization. In Section 4.6 (Lines 294-299) and Table 7 (oracle performance analysis), we show that better localizers further improve video question answering accuracy, emphasizing the need for more future work on such temporal localization. In appendix Sec. D, Limitations (appendix Lines 136-139), we also discuss the failure cases of the current single-frame Localizer.

> This is probably why there was no significant change in their performance when the self-refine part was removed in the ablation study.

Our single-frame Localizer improves video QA performance even without self-refinement. As shown in Table 3 rows B, C, and D, our single-frame Localizer demonstrates non-trivial performance boosts on a multitude of long-video QA benchmarks.
| **Model** | **NExT-QA (Avg.)** | **STAR (Avg.)** | **How2QA** | **TVQA** |
|-------------------------------|----------------|-------------|-------------|-------------|
| B - Answerer-only | 62.4 | 42.2 | 70.8 | 36.6 |
| C - SeViLA+ (Localizer + Answerer) | 63.6 (+1.2) | 44.6 (+2.4) | 72.3 (+1.5) | 38.2 (+1.6) |
| D - SeViLA (Localizer + Answerer + self-refinement) | 64.6 (+**2.2**) | 45.5 (+**3.3**) | 72.9 (+**2.1**) | 39.1 (+**2.5**) |

Inspired by your suggestion, we also explored a multi-frame Localizer, but found that the single-frame Localizer works better. As shown in the table below, the 4-frame Localizer performs worse than the single-frame Localizer in both zero-shot and fine-tuning settings. We suspect that this is because our backbone model (BLIP-2) has not seen video data during its pretraining. We think a multi-frame Localizer could be more powerful once enough temporal grounding annotations are available; we leave its improvement to future work.

| **Answerer** | **# frames of Localizer** | **NExT-QA (Avg.)** |
|----|------------------------|--------------------|
| zero-shot | 1 | 64.6 |
| zero-shot | 4 | 63.6 (-1.0) |
| fine-tuned | 1 | 73.4 |
| fine-tuned | 4 | 71.3 (-2.1) |

### **Weakness 2**
> The proposal is not good at dealing with temporal causality. Because BLIP-2 is an image-text MLLM, there is no space-time inference; the authors directly add the Q-former queries generated by each frame to the word tokens of the LLM. This is equivalent to letting the Q-former itself realize temporal understanding, an ability the Q-former does not have. Could you clarify?

Please see the general response.
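The backward chain debated above (frame-level pseudo-labels generated by the Answerer) can be sketched as follows; `single_frame_answer` is a hypothetical stand-in for the Answerer run on one frame, not the authors' code:

```python
def pseudo_label_frames(frames, question, gold_answer, single_frame_answer):
    """A frame gets a positive pseudo-label iff the Answerer predicts the
    gold answer from that frame alone; the labels then refine the
    Localizer (weak supervision, no manual moment annotations)."""
    return [int(single_frame_answer(f, question) == gold_answer)
            for f in frames]

# toy stand-in: only frames 3 and 4 suffice to answer correctly
sfa = lambda f, q: "bend down" if f in (3, 4) else "jump"
labels = pseudo_label_frames(range(6), "what next?", "bend down", sfa)
```

The reviewer's concern maps directly onto this sketch: a frame that is informative only in combination with its neighbors still receives label 0.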
Rebuttal 1: Rebuttal: We thank the reviewers for their time and valuable comments. We appreciate that the reviewers recognized:

- the motivation of our localization+answering design (b5LJ, wzBa)
- the novelty of our self-chaining framework design (TABX)
- strong experimental results (TABX, b5LJ, wzBa, mWtS)
- extensive ablation studies and analysis (TABX, b5LJ, wzBa)
- well-organized presentation and clear description (b5LJ, wzBa)
- the contribution of extending an image-language model to video-language tasks (b5LJ, mWtS).

In the responses, we include the following discussion/experiments:

- temporal modeling in both Localizer and Answerer (TABX, b5LJ)
- our novelty compared with prior work (wzBa)
- exp1: single-frame vs. multi-frame Localizers (TABX)
- exp2: single-model version, with Localizer and Answerer sharing the Q-former (b5LJ)
- exp3: iterative self-refinement (b5LJ)
- exp4: SeViLA with another image-LM (MiniGPT-4) (mWtS)
- exp5: weakly-supervised pre-training of the Localizer (mWtS)

**Please also find the attached PDF for an additional qualitative example (Reviewer b5LJ).**

Below we answer a common question about **temporal modeling in the Answerer** raised by Reviewer TABX (W2) and Reviewer b5LJ (W2).

Reviewer TABX W2:
> The proposal is not good at dealing with temporal causality. Because BLIP-2 is an image-text MLLM, there is no space-time inference; the authors directly add the Q-former queries generated by each frame to the word tokens of the LLM. This is equivalent to letting the Q-former itself realize temporal understanding, an ability the Q-former does not have. Could you clarify?

Reviewer b5LJ W2:
> Second, temporal modeling is also missing in the Answerer module, which makes the model less capable of modeling temporal dependencies compared with video-based models. These limitations may not be significant in the current VQA benchmarks since the questions are less sensitive to temporal dependencies and reasoning.
Our Answerer conducts temporal modeling over multiple frames via the LLM rather than the Q-former. As shown in Figure 2 and Lines 147-149, the Answerer's Q-former does not take multiple frames at once but processes each frame into query features independently; those features are then concatenated in temporal order and fed into the LLM with Frame ID hints. (Note that '**BLIP-2 concat**' in Tables 1 & 2 is equivalent to our '**Answerer**', a contribution that extends BLIP-2 to video.)

According to the fine-tuning results in Table 1 (and the table below), we find that the LLM in our Answerer is capable of temporal understanding with multiple-frame inputs, since our Answerer performs better than BLIP-2 voting, which outputs answers by majority voting with the original BLIP-2 taking a single frame as input:

| **Model** | **NExT-QA (Avg.)** | **STAR (Avg.)** | **How2QA** | **TVQA** |
|----|----|----|----|----|
| BLIP-2 voting | 70.1 | 51.8 | 79.6 | 54.5 |
| Our Answerer | 72.6 (+**2.5**) | 62.0 (+**10.2**) | 82.2 (+**2.6**) | 59.8 (+**5.3**) |

Such temporal understanding via the LLM is evident in the case of STAR-Sequence, a task demanding heavy temporal understanding, where our Answerer outperforms BLIP-2 voting by a significant **13.1%** (fine-tuning, Table 1) and **5.3%** (zero-shot, Table 2). These observations are also highlighted in Section 4.2.a, where we discuss the importance of temporal modeling (Lines 190-193).

Pdf: /pdf/2ff0d67bd372e357bc1372c5de0221d552d49e1c.pdf
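The interleaving described above (per-frame Q-former query features concatenated in temporal order with frame-ID hints before the LLM) can be illustrated schematically; here token strings stand in for query embeddings, and the function names are illustrative, not the authors' API:

```python
def build_llm_input(per_frame_queries, prompt):
    """Concatenate each frame's query features in temporal order,
    prefixing each group with a 'Frame i' hint, then append the
    question prompt for the LLM."""
    tokens = []
    for i, queries in enumerate(per_frame_queries, start=1):
        tokens.append(f"Frame {i}:")
        tokens.extend(queries)
    tokens.append(prompt)
    return tokens

# toy example: 2 frames, 3 query tokens each
qf = [["q1a", "q1b", "q1c"], ["q2a", "q2b", "q2c"]]
seq = build_llm_input(qf, "Question: what happens?")
```

The point of the rebuttal is that any cross-frame (temporal) reasoning happens over this interleaved sequence inside the LLM, not inside the per-frame Q-former.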
NeurIPS_2023_submissions_huggingface
2,023
Parallel Sampling of Diffusion Models
Accept (spotlight)
Summary: The paper introduces a new method for speeding up (i.e., reducing the latency of) sampling from diffusion models by sampling all time steps in parallel from some initialization and then iterating this procedure until convergence (Picard iteration). Modifications such as a sliding window are made for implementation efficiency. The method is evaluated on control tasks and large-scale image datasets, with a consistent reduction in latency reported at the expense of increased compute.

Strengths: This paper is very good. The problem being tackled is highly significant, and I think the approach will be widely used to easily achieve a reduction in latency when extra compute is available. The method is clearly presented with a nice bit of theoretical analysis. The experiments on practically relevant datasets clearly show the benefits of the method. I highly encourage acceptance.

Weaknesses: I think it would have been nice to pick a suitably small problem and fully explore the compute/latency trade-off by varying the window size. I appreciate the current experiment in Figure 4, which does nicely show this trade-off in the compute-intensive regime; however, the picture is slightly obscured by the necessity of also considering GPU flop/s saturation (which is an important consideration in its own right, I agree). I think the paper would benefit from a simpler experiment where the window size is varied all the way up to the number of denoising steps, where we could see a potential 20x reduction in latency if DDPM is integrated with 1000 steps and the Picard iterations converge in around 50 steps.

For the experiments on Stable Diffusion, it is not clear to what extent the method's benefits extend into the very-low-model-evaluation regime. DPMSolver should be able to go down to very few model evaluations and retain decent quality; in this regime, how does your method compare?
It is OK if the speedup is smaller in this case, but it would be good to know the extent to which the method can provide benefits.

I think it would be good to stick with the terminology "batch window size" throughout the experiments. When getting to Section 4.2.1, it is a little confusing to start talking about batch sizes, which may imply that you are generating entirely independent trajectories in parallel, whereas I assume you still mean the batch containing the points in the window.

Edit after rebuttal: I have read the authors' response and my concerns have been addressed. I will keep my score of 8.

Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Proposition 2, how realistic is the assumption on the convergence rate of the Picard iterations in the context of integrating the probability flow ODE of a trained diffusion model? This has been investigated somewhat across the two experiments, but what would you say is the interaction between the number of Picard iterations and the number of steps used to integrate the ODE? Do you believe there could be a relation where more steps help reduce the total number of Picard iterations, or are these completely orthogonal quantities?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The method is obviously more expensive than sequential diffusion sampling due to all steps being taken in parallel, but this trade-off is well explained. Further, as explained above, the limits of when the method does not actually provide a speedup could have been better investigated.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review.

> **pick a suitably small problem and fully explore the compute/latency tradeoff by varying the window size… vary the window size all the way up to the number of denoising steps.**

Thank you for the suggestion. To better explore this trade-off of varying window sizes, we use the Square task from Table 1 on a single GPU. This task uses a smaller diffusion model, so the effect of FLOPS saturation is less noticeable (but unfortunately still present for large window sizes). We see that the relative speedup increases up to a batch window size of 150, and then decreases as FLOPS saturation kicks in.

Square task:

| batch window size | samples/sec | relative speedup | parallel iters |
|-------------------|-------------|------------------|------------|
| 1 (sequential) | 1.09 | 1x | 1000 |
| 50 | 9.07 | 8.3x | 76 |
| 100 | 11.11 | 10.2x | 56 |
| 150 | 12.02 | 11.0x | 49 |
| 200 | 11.84 | 10.9x | 47 |
| 300 | 11.40 | 10.4x | 44 |
| 500 | 9.64 | 8.9x | 43 |
| 1000 | 7.69 | 7.0x | 43 |

For a batch window size of 150, we see that (sequential steps) / (parallel iterations) = 1000 / 49, giving an over 20x theoretical speedup, similar to that of the image models (L252 in the paper). Unlike image models, however, FLOPS saturation is less of an issue here, so we are able to get an 11x speedup when using a batch window size of 150. This highlights the fact that the relative speedup of ParaDiGMS can increase as the effect of FLOPS saturation diminishes with better hardware in the future.

> **to what extent the method's benefits extend into the very low model evaluation regime? in this regime, how does your method compare?**

Please see the shared response on "Speedup with few steps". We present exciting developments where we implement custom multiprocessing for ParaDiGMS and are now able to see a 2.5x speedup in the most widely used setting of 50-step DDIM, and a 1.3x speedup even for 25-step DDIM.
> **the limits of when the method does not actually provide a speedup could have been better investigated**

Based on the new experiments, we see a drop in speedup for 25-step DDIM (1.3x) compared to the speedup for 50-step DDIM (2.5x). This suggests that 25-step DDIM is roughly the limit of the current method on large models such as Stable Diffusion. On smaller models such as the diffusion policy, we achieve large speedups even for 15-step DDIM.

> **stick with the terminology batch window size as you go through the experiments. When getting to section 4.2.1 it is a little confusing to start talking about batch sizes**

Thanks for the catch; we will be consistent in referring to it as the batch window size.

> **In proposition 2, how realistic is the assumption of the convergence rate of the Picard iterations in the context of integrating the probability flow ODE from a trained diffusion model?**

We believe that this assumption is very realistic in the setting of diffusion models, and it empirically seems to be satisfied in the experiments we tried. One way to see this is that the tolerance used in our experiments to attain equal sample quality is much looser than the necessary tolerance stated in Proposition 2. This suggests that the convergence rate in practice is much faster than the rate stated in the assumption.

> **what would you say is the interaction between the number of Picard iterations and the number of steps used to integrate the ODE. Do you believe there could be any relation where more steps could be helpful to reduce the number of total Picard iterations or are these completely orthogonal quantities?**

Based on our exploration, using more steps to integrate the ODE empirically leads to more Picard iterations until convergence.
That being said, we think it may be possible that more steps can help reduce the number of Picard iterations, since more steps means that each step involves an easier prediction, which may translate to faster convergence. --- Rebuttal 2: Comment: Thank you for the response, the extra experiments with 50-step DDIM look really promising and this has alleviated my concerns. I will keep my score as it is and continue to recommend acceptance.
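The (sequential steps) / (parallel iterations) accounting discussed in this thread can be illustrated on a toy problem. The sketch below is our own minimal example, not the paper's implementation: it runs full-trajectory Picard iterations on a discretized ODE, with the drift dx/dt = -x standing in for a learned probability-flow drift, and converges in far fewer iterations than the number of sequential Euler steps:

```python
import numpy as np

def sequential_euler(x0, drift, ts):
    """Standard sequential solve: one denoising step at a time."""
    xs = [x0]
    for i in range(len(ts) - 1):
        xs.append(xs[-1] + (ts[i + 1] - ts[i]) * drift(xs[-1], ts[i]))
    return np.array(xs)

def picard(x0, drift, ts, tol=1e-6, max_iters=1000):
    """Picard iteration: refine every point of the trajectory at once.

    Each iteration evaluates the drift at all time steps "in parallel"
    (a list comprehension stands in for a batched GPU call), then rebuilds
    the trajectory with a cumulative sum. Stops when the largest pointwise
    update falls below tol.
    """
    n, hs = len(ts), np.diff(ts)
    xs = np.full(n, x0, dtype=float)          # copy initialization
    for k in range(1, max_iters + 1):
        drifts = np.array([drift(xs[i], ts[i]) for i in range(n - 1)])
        new = x0 + np.concatenate(([0.0], np.cumsum(hs * drifts)))
        if np.max(np.abs(new - xs)) < tol:
            return new, k
        xs = new
    return xs, max_iters

drift = lambda x, t: -x                        # toy stand-in for the learned drift
ts = np.linspace(0.0, 1.0, 101)                # 100 "denoising" steps
seq = sequential_euler(1.0, drift, ts)
par, iters = picard(1.0, drift, ts)
print(f"{iters} Picard iterations vs {len(ts) - 1} sequential steps")
```

Here the "parallel" drift evaluation is just a list comprehension over a cheap function; in ParaDiGMS each iteration would be a single batched network call, which is where the latency saving comes from.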
Summary: Instead of reducing the number of denoising steps, the paper proposes to parallelize diffusion denoising sampling via Picard iterations, by guessing the solution of future denoising steps and iteratively refining until convergence, which trades compute for speed. The authors then present ParaDiGMS to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel and verify its effectiveness. Strengths: - The idea is interesting and may promote another way for accelerated sampling. - The procedure of the algorithm is clear. Weaknesses: Overall, I think there are still many problems with the paper. Please refer to the questions. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: **Q1**: Some notations may be a bit confusing, since $x_T$ is conventionally used to represent the initial noise of diffusion, while the authors use $x_0$ to represent the initial point. **Q2**: For Algorithm 1, given the same number of denoising steps $T$, the proposed method is essentially similar to existing techniques such as PNDM and DPMSolver. The existing high-order techniques also cache the predicted scores from previous time steps, while the proposed method computes all the predicted scores in a small window and uses them for the next selected window. **Q3**: The number of model evaluations: For all the tables, do you choose a different $T$ and use more parallel iterations compared to the baseline to match the sampling performance? If so, what is the value of $T$? If I am not wrong, $T = \text{ModelEval} / \text{ParaIter}$? The details are not clear. **Q4**: Following the prior question, how about setting the same $T$? Can parallel sampling improve the performance (e.g. FID)? These results could be important. If the performance remains the same, people can directly use a smaller number of denoising steps $T$. **Q5**: For image generation, does batch size $= 80$ mean the authors use a batch window size of $80$? 
**Q6**: The evaluation results of image generation are not sufficient. How about the commonly-used FID for Stable Diffusion? And more qualitative results should be shown and analysed. **Q7**: The algorithm seems not practical. As the authors said, this method may consume more resources. However, **Q4** is not clear. If the performance remains the same, more resources should be used to maximize sample throughput instead of parallel computing. **Q8**: Related works should discuss more parallelization techniques and the differences between the proposed method and similar parallelization techniques. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review. However, we believe that there is a misunderstanding as to the nature of our proposed method. ParaDiGMS is the first parallel sampling method for diffusion models: computing multiple denoising steps at the same time. This is fundamentally different from existing higher-order solvers, which use multiple past timesteps to compute a single denoising step. It appears that Q2, Q3, Q4, Q7, Q8 stem from this misunderstanding. Please see the responses below. &nbsp; > **Q1: the authors use x_0 to represent the initial point** Yes, we make a note of this on L91 of the submission. &nbsp; > **Q2: For Algorithm 1, given the same denoising steps T, the proposed method is essentially similar to the existing techniques such as PNDM, DPMSolver** No, this is not the case. Our proposed method is fundamentally different from existing techniques such as PNDM, DPMSolver since our method denoises in parallel. In fact, since our method and existing techniques are addressing orthogonal issues, they can even be combined (e.g. ParaDiGMS + DPMSolver = ParaDPMSolver), as we demonstrate in our paper. To elaborate, these existing higher-order methods use multiple previous timesteps to predict a single denoising step. However, they are still sequential in nature, predicting one denoising step at a time. In contrast, our method predicts multiple steps in parallel, where each step can either be predicted using just the previous step (e.g. ParaDDIM) or using higher-order solvers (e.g. ParaDPMSolver). &nbsp; > **Q3: do you choose a different T and use more parallel iterations compared to the baseline to match the sampling performance? If so, what is the value of T? If I am not wrong, T = ModelEval/ParaIter? The details are not clear.** No, we use the same T when comparing sequential and parallel sampling. For example, for DDPM we use T=1000. 
The number of parallel iterations is lower than T because we are computing multiple steps in parallel. ModelEval/ParaIter does not equal T, but is rather (approximately) the batch window size, i.e., the number of steps we compute in parallel. The relationship is not exact because the batch window gets truncated near the endpoints of the trajectory. &nbsp; > **Q4: how about setting a same T? Can parallel sampling improve the performance (e.g. FID)?** We are already using the same T. We would not expect parallel sampling to improve performance since we are using the same T. The goal of our method is to improve sample latency. &nbsp; > **Q5: does batchsize=80 mean the author uses a batch window size of 80?** Yes, batchsize refers to “batch window size”. We will update this. &nbsp; > **Q6: How about the commonly-used FID for stable-diffusion? And more qualitative results should be shown and analysed** For stable-diffusion the commonly used metric is CLIP score since it is a text-to-image task. We show the CLIP score in Table 4 of our paper. For other tasks such as unconditional generation with LSUN, we show FID score in Table 5 of our paper. We show some qualitative results in Figure 4, where we see that the generated images are of high quality. We can include more image samples in the updated version of our paper. &nbsp; > **Q7: The algorithm seems not practical. As the authors said, this method may consume more resources. However, Q4 is not clear. If the performance remain the same, more resources should be used to maximize sample throughput instead of parallel computing.** It is unclear why the reviewer believes the algorithm is not practical. Our results show a very concrete 2-4x speedup in sampling latency across a range of diffusion policy and diffusion image generation tasks. The goal of our method is not to improve performance or sample throughput, but to improve sample latency. 
Our method requires more computation, but for many applications sample latency is critical and/or users are insensitive to the cost of compute. Please see the shared response on “Focus on latency” for more details. &nbsp; > **Q8: Related works should discuss more parallelization techniques and the differences of between the proposed method and similar parallelization techniques.** To our knowledge, our work is the first parallelization technique for diffusion model sampling, so there are no other similar techniques to compare with. &nbsp; We hope this addresses your concerns, and we are happy to answer any additional questions regarding our paper. Please consider updating your score in light of the initial review’s misunderstanding of our work. --- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: Thanks for the responses. Most of the questions have been addressed. And I believe that the idea is interesting and can bring new direction to the diffusion sampling community. However, - Besides CLIP score, FID is also a common metric for text-to-image to evaluate the image quality. The evaluation is usually done in a zero-shot setting on text-image datasets of natural images such as COCO. The readers may be concerned about the image quality as well. - When it comes to the balance between latency & performance, one would expect higher performance with lower latency. As we know, reducing the number of sampling steps could be an obvious way to reduce the latency. Therefore, the comparison (e.g., FID, CLIP score) of different methods with the same latency is important, too. --- Reply to Comment 1.1.1: Title: Thanks for the responses Comment: Thank you for the additional comments. We're glad to hear that most of the questions have been addressed. - **Text-to-image FID score:** Sure, we can evaluate zero-shot FID score on text-to-image on the COCO dataset. 
We took 5k images from the validation set, and drew 5k samples with 100-step DDIM. Here is the table from before with an extra column for FID score (lower is better). We note that though the FID score for our parallel sampling method is slightly better, we believe this is only due to natural variations in the evaluation metric.

Stable Diffusion v2-0

| Method | time (s) | CLIP (↑) | FID score (↓) 5k | Parallel Iters |
|-----------------|--------------|------|------|----------------|
| DDIM 100 steps | | | | |
| sequential | 5.34 | 31.9 | 25.0 | 100 |
| parallel w/ tolerance 5e-2 | 1.96 (**2.7x**) | 31.9 | 24.4 | 19 |

&nbsp; - **Latency & Performance:** Yes, we agree that when matching latency, our method should give better performance. We point to Table 5 in our paper, where it shows that 1000-step ParaDDPM gives both **better latency** (8.2s vs 12.2s) and **better performance** (12.9 FID vs 15.7 FID on 5k samples) when compared to sequential 500-step DDIM. &nbsp; We hope this addresses the additional comments from the reviewer. We're pleased that the reviewer believes the idea is interesting and can bring new direction to the diffusion sampling community, and hope the reviewer will consider updating their score if their concerns have been addressed.
Summary: The paper proposes an approach for speeding up sampling of diffusion models using Picard iterations. Multiple time steps in the sampling process are predicted in parallel, iteratively refining until convergence. Rather than refining all time steps at once, which would not be practical, a sliding window approach is used: points in the window are refined, then the window is moved once the later time steps have converged. Experimentally this approach is tested on diffusion policy learning and image generation benchmarks, where it is shown to provide substantial speedup over sequential sampling methods, while achieving comparable sample quality. Strengths: - Using Picard iterations to speed up diffusion model sampling makes sense and is useful in many low latency scenarios such as image editing. - Showing that in the worst case, the approach will converge at least as fast as sequential sampling is nice (Lines 127-137). Similarly, providing tolerance bounds is useful (Lines 161-164). - A good set of experiments/ablations on a variety of benchmarks are provided. The approach is clearly shown to be faster than sequential approaches, while providing similar quality samples. - The approach is tested with multiple diffusion samplers, including the recent DPMSolver, showing that it can provide speedup even to already fast solvers. Weaknesses: - It is mentioned that a more relaxed tolerance value can be used when determining convergence (line 166). It would be useful to see the effect of that value on sampling times/image quality. - Currently multiple GPUs are required to achieve net speedup on Stable Diffusion (Figure 4f). - The approach is only useful for reducing latency; if generating lots of samples, then sampling batches sequentially is more efficient. However, as mentioned in the strengths, there are many applications where this is useful. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In Algorithm 1 on line 9, new points are initialised from the latest point in the window. Would it make more sense to initialise the values based on the prediction of $x_0$? - What practically is the impact of using different tolerance values? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are well discussed in section 4.2.1 and section 5. It’s also worth noting that due to the extra compute, this approach will use more energy so has a negative environmental impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
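The sliding-window mechanism summarized above can be sketched on a toy problem. The code below is a hypothetical illustration, not the paper's exact Algorithm 1: a fixed-size window of steps is refined with Picard updates, the window slides past leading steps whose updates fall below a tolerance, and newly exposed points are copy-initialized from the last point of the old window (the initialization choice the question above asks about), with dx/dt = -x standing in for the probability-flow drift:

```python
import numpy as np

def sliding_window_picard(x0, drift, ts, window=20, tol=1e-6, max_iters=10_000):
    """Illustrative sliding-window Picard refinement (not the paper's exact
    Algorithm 1). A fixed-size window of steps is refined per iteration;
    once the leading steps stop changing (update below tol), the window
    slides past them and newly exposed points are copy-initialized from
    the last point of the old window."""
    n, hs = len(ts), np.diff(ts)
    xs = np.full(n, np.nan)
    lo, hi = 0, min(window, n - 1)
    xs[: hi + 1] = x0                          # copy-initialize the first window
    iters = 0
    while lo < n - 1 and iters < max_iters:
        iters += 1
        drifts = np.array([drift(xs[i], ts[i]) for i in range(lo, hi)])
        new = xs[lo] + np.concatenate(([0.0], np.cumsum(hs[lo:hi] * drifts)))
        err = np.abs(new[1:] - xs[lo + 1 : hi + 1])
        xs[lo + 1 : hi + 1] = new[1:]
        shift = 0                              # leading steps whose update converged
        while shift < hi - lo and err[shift] < tol:
            shift += 1
        if shift:
            lo += shift
            new_hi = min(lo + window, n - 1)
            xs[hi + 1 : new_hi + 1] = xs[hi]   # copy-initialize newly exposed points
            hi = new_hi
    return xs, iters

drift = lambda x, t: -x                        # toy stand-in for the learned drift
ts = np.linspace(0.0, 1.0, 101)                # 100 "denoising" steps
xs, iters = sliding_window_picard(1.0, drift, ts)
print(f"{iters} window iterations for {len(ts) - 1} steps")
```

Because the first point of each window is already converged, the window is guaranteed to slide at least once every two iterations, which mirrors the worst-case guarantee noted in the strengths above.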
Rebuttal 1: Rebuttal: We thank the reviewer for the review. &nbsp; > **It would be useful to see the effect of tolerance on sampling times/image quality. What practically is the impact of using different tolerance values?** Thanks for the suggestion. Here are additional experiments on a sweep over tolerance values for 200-step DDIM. We can see that the sample quality starts to degrade at a tolerance of 5e-1. We can use a tolerance of 1e-1 to maintain the same quality (or 5e-2 to be safe), and still achieve a sizable speedup compared to the sequential baseline.

Stable Diffusion v2-0

| tolerance | CLIP | time (s) | Parallel Iters |
|-----------|------|------|------------|
| DDIM 200 steps | | | |
| sequential | 31.9 | 10.3 | 200 |
| 5e-3 | 31.9 | 7.4 (1.4x) | 36 |
| 1e-2 | 31.9 | 5.1 (2.0x) | 28 |
| 5e-2 | 31.9 | 4.9 (2.1x) | 21 |
| 1e-1 | 31.9 | 3.8 (**2.7x**) | 16 |
| 5e-1 | 31.5 | 2.1 (4.9x) | 13 |
| 1e-0 | 31.5 | 1.9 (5.4x) | 12 |

&nbsp; > **Currently multiple GPUs are required to achieve net speedup on Stable Diffusion** Yes, multiple GPUs provide enough parallel computation to achieve net speedup on Stable Diffusion. &nbsp; > **only useful for reducing latency** Please see the shared response on “Focus on latency”. Our work indeed focuses on improving sample latency. &nbsp; > **New points are initialised from the latest point in the window. Would it make more sense to initialise the values based on the prediction of x_0?** This is a great suggestion! We tried some initial explorations on improving the initialization by extrapolating the trajectory using the prediction of x_0, but have not noticed consistent improvements over the current choice of copy initialization. The new suggestion is sometimes better but sometimes worse, so it may require more tinkering. In general, we do agree that the choice of initialization is a promising direction for further improvements, since the current choice of copy initialization is likely suboptimal. 
--- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks to the authors for their responses; it is great to see the improvement in the low step scenario, this definitely strengthens the approach making it applicable in more scenarios. Additionally, I appreciate the evaluation of the tolerance value, I found this very informative. And I thank the authors for the comments on different initialisation strategies, it is interesting to hear that it is not always helpful. After these comments and the responses to the other reviewers, I am happy to increase my rating to accept.
Summary: The authors propose a technique to reduce the time taken to sample from a diffusion model at the expense of using more FLOPs. Roughly speaking, the authors parallelize sampling by "guessing" xt at numerous values of t simultaneously, then computing the score function for each of these values of xt in parallel, and then repeating to refine the estimate of each xt over multiple steps. Strengths: - The paper is clearly written. - The proposed method is novel. - The proposed method is practically useful in situations where having low latency is important (e.g. iterative human-in-the-loop image generation). - The experiments are thorough, showing speed-ups on a range of domains and even showing speedups when efficient samplers like DPMSolver are used. Weaknesses: Overall I am satisfied that this paper makes a tangible contribution. I have a couple of questions though about certain scenarios in which it is not immediately clear that this method is useful. - I would be interested to see the performance of the latent image generation when fewer steps are used. E.g. for Stable Diffusion (https://github.com/CompVis/stable-diffusion), the given example command "python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms" produces good images in only 50 steps, which is much fewer than the 200 used in Section 4.2.1. Does the proposed method still provide a gain in this setting? - How does this method interact with other techniques to speed up sampling like progressive distillation (https://arxiv.org/abs/2202.00512)? I appreciate that one benefit of the proposed method is that it avoids the need to have any sort of "distillation" training phase, but can the proposed method provide additional advantages when used in combination with a "progressively distilled" model? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review. &nbsp; > **fewer steps are used.** Please see the shared response on “Speedup with few steps”. We present exciting developments where we implement custom multiprocessing for ParaDiGMS and are now able to see a 2.5x speedup on the most widely used setting of 50-step DDIM, and a 1.3x speedup even for 25-step DDIM. &nbsp; > **How does this method interact with progressive distillation** In terms of compatibility, our method can be used in combination with progressive distillation. The distilled models are sampled using sequential denoising, which can similarly be parallelized using ParaDiGMS. Since the distilled models use even fewer steps (e.g. 4 or 8), achieving speedups with parallel sampling may be more challenging, but we believe it may be possible with future improvements. In terms of differences, as you correctly point out, progressive distillation requires retraining, whereas our method is an off-the-shelf sampling method that does not require any additional training. It is also important to note that distillation often leads to slightly worse sample quality, whereas our method is able to maintain the same sample quality. --- Rebuttal Comment 1.1: Comment: Thank you for the response - it addressed my comments and I will continue to recommend that this paper is accepted.
Rebuttal 1: Rebuttal: Thank you all for the helpful reviews. We are happy to hear that our contribution “brings new ideas to the diffusion sampling community”, is “practically useful”, and is “highly significant and…will be widely used”. We also appreciate that reviewers found the writing “easy to follow and overall well-written”, with a “good set of experiments/ablations on a variety of benchmarks”. &nbsp; First, we have some exciting developments regarding speedups for low-step sampling. **We optimized our implementation for multi-GPU inference, and are now able to sample 2.5x faster on 50-step DDIM on stable diffusion (details below)!** This is very exciting as 50-step DDIM is the most popular setting used for stable diffusion. &nbsp; Next, we discuss two common points raised by the reviewers. 1. **Speedup with few steps** \ \ First, we note that our method shows around 2x speedup even when using very few steps (15-step DDIM and 15-step DPMSolver) on the diffusion policy experiments. For diffusion policy, the models are smaller so a single GPU can handle parallel computation without much slowdown. \ \ Since stable diffusion is a larger model, the FLOPS saturation of the GPU plays a large role, so running with multi-GPU is necessary. Our initial implementation using torch.DataParallel had a large overhead when using fewer steps, so we were unable to see any speedups for 50-step DDIM. Torch DistributedDataParallel also did not work since our use case is inference and not training. \ \ Recently, we implemented custom multiprocessing inference for ParaDiGMS, and we are able to see a substantial speedup (2.5x) for 50-step DDIM! This is very exciting since, as the reviewers noted, 50-step inference is the most popular and the default setting for stable diffusion. \ \ As suggested by the reviewers, we run additional experiments to evaluate our algorithm on 100-step, 50-step, and 25-step DDIM for stable diffusion. 
We note that when using fewer steps, the tolerance level should be lowered to maintain the same sample quality. For 100-step DDIM, tolerance 5e-2 achieves the same sample quality with 2.7x speedup. For 50-step DDIM, tolerance 5e-2 achieves the same sample quality with 2.5x speedup. For 25-step DDIM, tolerance 1e-2 achieves the same sample quality with 1.3x speedup.

Stable Diffusion v2-0

| Method | time (s) | CLIP | Parallel Iters |
|-----------------|--------------|------|----------------|
| DDIM 100 steps | | | |
| sequential | 5.34 | 31.9 | 100 |
| tolerance 5e-2 | 1.96 (**2.7x**) | 31.9 | 19 |
| tolerance 1e-1 | 1.68 (3.1x) | 31.6 | 17 |
| DDIM 50 steps | | | |
| sequential | 2.62 | 31.9 | 50 |
| tolerance 5e-2 | 1.05 (**2.5x**) | 31.9 | 17 |
| tolerance 1e-1 | 0.93 (2.8x) | 31.3 | 15 |
| DDIM 25 steps | | | |
| sequential | 1.31 | 31.7 | 25 |
| tolerance 1e-2 | 0.99 (**1.3x**) | 31.7 | 17 |
| tolerance 5e-2 | 0.76 (1.7x) | 31.4 | 13 |

2. **Focus on latency**\ \ As stated in the paper, and noted by many reviewers, the goal of our method is to improve sample latency but not throughput. For many applications such as interactive generation or real-time policy execution, sample latency is much more critical. Moreover, many users are insensitive to the cost of compute during inference, since the GPU demands during inference are much lower than that during training. Finally, we note that the reviewers are in agreement with us that “there are many applications where [improving latency] is useful”. &nbsp; We are extremely excited about the immediate benefits of ParaDiGMS – a 2.5x speedup on the default setting of stable diffusion. We are even more excited about the general potential of parallel sampling and the new avenue of research it unlocks for diffusion models. 
We fully agree with the reviewers on the significance of the idea that our paper introduces to the community, and are eager for future improvements on aspects such as discretization, batch window initialization, or multiprocessing optimization.
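The tolerance sweeps in the tables above have a simple qualitative analogue. The sketch below is illustrative only (dx/dt = -x stands in for the learned drift, and deviation from the sequential trajectory stands in for CLIP/FID quality): looser tolerances terminate in fewer parallel iterations, at the cost of a trajectory farther from the sequential solution:

```python
import numpy as np

drift = lambda x, t: -x                  # toy stand-in for the learned drift
ts = np.linspace(0.0, 1.0, 101)          # 100 steps, uniform h = 0.01
hs = np.diff(ts)
exact = 0.99 ** np.arange(101)           # the sequential Euler solution in closed form

def picard_with_tol(tol, max_iters=1000):
    """Full-trajectory Picard iteration; stop when the largest update < tol."""
    xs = np.ones(101)                    # copy initialization from x0 = 1
    for k in range(1, max_iters + 1):
        drifts = np.array([drift(xs[i], ts[i]) for i in range(100)])
        new = 1.0 + np.concatenate(([0.0], np.cumsum(hs * drifts)))
        if np.max(np.abs(new - xs)) < tol:
            return new, k
        xs = new
    return xs, max_iters

for tol in (1e-1, 1e-3, 1e-6):
    xs, k = picard_with_tol(tol)
    dev = np.max(np.abs(xs - exact))
    print(f"tol={tol:.0e}: {k:3d} iterations, max deviation {dev:.2e}")
```

On this toy problem, loosening the tolerance cuts the iteration count at the cost of trajectory accuracy, mirroring the latency/quality trade-off in the tables above.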
NeurIPS_2023_submissions_huggingface
2023
Summary: Naive parallelization can let us generate multiple samples, which improves throughput. However, the wall-clock time remains the same. To reduce the wall-clock time during diffusion model sampling, this paper proposes ParaDiGMS, a parallel sampling method for diffusion models based on Picard iteration, which improves latency. ParaDiGMS is compatible with classifier-free guidance and prior fast sampling methods. The experiment results show that ParaDiGMS can achieve a speedup without quality degradation. Strengths: 1. The authors provide a theoretical guarantee and error analysis of the proposed method. 2. The paper is motivated by the classical Picard iteration, which brings new ideas to the diffusion sampling community. 3. The paper is easy to follow and overall well-written, which helps the reviewer understand the content. 4. The experiment results show that it achieves a speedup with no measurable decrease in quality. Weaknesses: The applicability of the proposed method is limited to certain situations. The total evaluations of ParaDiGMS are approximately twice that of the baseline method according to the experiments. Therefore, the proposed method may not be suitable for situations where maximizing sample throughput is a priority. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the section on experiments of diffusion image generation, the authors test ParaDiGMS on the settings of 1000 NFEs for DDPM, 200/500 NFEs for DDIM, and 200 NFEs for DPMSolver. It is questionable whether ParaDiGMS can achieve a similar speedup in fewer NFEs (e.g., around 50 NFEs) since fewer NFEs are commonly used in downstream applications and the state-of-the-art samplers can achieve comparable results in this setting. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review. &nbsp; > **may not be suitable for situations where maximizing sample throughput is a priority** Please see the shared response on “Focus on latency”. Our work indeed focuses on improving sample latency. &nbsp; > **It is questionable whether ParaDiGMS can achieve a similar speedup in fewer NFEs** Please see the shared response on “Speedup with few steps”. We present exciting developments where we implement custom multiprocessing for ParaDiGMS and are now able to see a 2.5x speedup on the most widely used setting of 50-step DDIM, and a 1.3x speedup even for 25-step DDIM. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. The acceleration on 50-step DDIM is exciting. However, the performance on Stable Diffusion evaluated by FID/CLIP saturates fast. As can be seen in the table, 25-step DDIM achieves a CLIP score of 31.7, which is close to the performance of 50-step DDIM (31.9). Could the authors provide quantitative results on LSUN church (as in your paper) to better show the speed-up of ParaDiGMS? --- Reply to Comment 1.1.1: Title: Comment Comment: Thanks for the suggestion! Here are quantitative results on LSUN church for 100/50/25 steps. We similarly compute FID score using 5k samples.

ddpm-ema-church-256

| Method | time (s) | FID score (↓) 5k | Parallel Iters |
|-----------------|--------------|------|----------------|
| DDIM 100 steps | | | |
| sequential | 2.48 | 15.10 | 100 |
| parallel w/ tolerance 1e-3 | 0.99 (**2.5x**) | 14.78 | 23 |
| DDIM 50 steps | | | |
| sequential | 1.24 | 15.33 | 50 |
| parallel w/ tolerance 1e-3 | 0.73 (**1.7x**) | 15.68 | 17 |
| DDIM 25 steps | | | |
| sequential | 0.61 | 15.59 | 25 |
| parallel w/ tolerance 1e-3 | 0.55 (**1.1x**) | 15.86 | 15 |

&nbsp; We can see that indeed pixel space diffusion is more challenging than latent space diffusion, and we noticed that using a lower tolerance (1e-3) was more suitable for pixel space diffusion. 
In terms of FID score, our method matches the performance for 100 steps (and is actually slightly better in this measurement), and is slightly worse for 50 and 25 steps. In terms of speed, for 100-step DDIM we see a 2.5x speedup, for 50-step DDIM we see a 1.7x speedup, and finally for 25-step DDIM we see a 1.1x speedup. This matches the trends from the latent diffusion experiments, where we also saw the speedups drop at around 25 steps. To summarize these results, pixel-space diffusion is less forgiving in terms of using few-step sampling. Fortunately, 100-step ParaDDIM can provide a significant (2.5x) boost in speed while maintaining higher sample quality than 50- or 25-step DDIM.
null
null
null
null
null
null
SyncTREE: Fast Timing Analysis for Integrated Circuit Design through a Physics-informed Tree-based Graph Neural Network
Accept (poster)
Summary: This paper proposes SyncTREE, which uses a bottom-up and top-down graph attention network with a Tree Contrastive Loss to predict the delay and slew for IC interconnects. Compared to other GNN methods, the proposed one achieves lower prediction error across synthetic and RISC-V benchmarks. Strengths: 1. The proposed method is quite novel and leverages the prior knowledge in timing analysis in network design and loss function design, achieving the best results on both synthetic and RISC-V benchmarks. Weaknesses: 1. The modified aggregation mechanism claims to preserve resistance information by linearly combining node and edge features. However, no ablation study or evidence on this technique has been shown. Similarly, the network designs, e.g., residual connections from the bottom-up tree, two directed graphs, or other introduced techniques are not evaluated through ablation studies. It seems that the benefits are mainly from the directed graphs, not GAT. How does this method perform if applied to other GNNs listed in Table 3? 2. The number of layers in the trained GAT is fixed, not adaptive to different circuits. If an RC tree is deeper than the GAT network, then the timing information cannot be propagated from source to sink. The generalization raises some concerns in this case. 3. In line 266, it claims the proposed method achieves better accuracy on larger circuits. However, Figure 4(d) seems to show that the relative prediction error gets worse with larger circuit sizes, if I understand correctly. Why the performance gets worse as circuit size changes is not explained. How to solve this generalization gap should be a key question to investigate for this work. 4. The runtime comparison needs to clarify the hardware platform. If the GAT uses a GPU, then it might be an unfair comparison against CPU-version SPICE. There exist GPU versions of LU factorization to accelerate matrix inversion. 5. The prediction error lacks intuitive analysis. 
Whether a 0.05 ps MAE is considered large or small enough for timing analysis is not clear. In other words, the significance of the results is unclear. How these delay and slew errors ultimately affect critical path delay or total negative slack should be discussed. The variance in the predicted error on each node is also important compared to MAE. 6. In terms of generalization and transfer learning, transferring to a different benchmark/circuit is expected instead of transferring from delay to slew prediction. How the physics-informed loss function helps to increase data efficiency compared to a pure data-driven method is not shown. The generalization and data efficiency are critical concerns for ML-based PDE solving tasks, which are not deeply explored in this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Major questions are summarized above. Basically, more ablation studies are required. More discussion on generalization to different circuits and data efficiency is expected. I will consider increasing the score if the major concerns above are addressed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
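The depth concern in Weakness 2 (a GAT with a fixed number of layers cannot propagate information from the source to sinks deeper than its layer count) can be demonstrated with a minimal message-passing sketch. The plain sum aggregation below is a generic stand-in for any L-layer GNN, not SyncTREE's actual operator:

```python
import numpy as np

def message_passing_rounds(adj, feats, rounds):
    """Plain sum-aggregation message passing: each round, every node adds
    its neighbors' features to its own. A generic stand-in for an L-layer GNN."""
    x = feats.copy()
    for _ in range(rounds):
        x = x + adj @ x
    return x

# A path graph 0-1-2-...-9: a "deep" RC tree with the driving source at node 0.
n = 10
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

feats = np.zeros((n, 1))
feats[0] = 1.0                 # only the source node carries a signal

out = message_passing_rounds(adj, feats, rounds=3)
print(out.ravel())             # nodes more than 3 hops from the source remain zero
```

After three rounds, only nodes within three hops of the source carry any signal; a tree deeper than the layer count would need more layers (or mechanisms like the residual bottom-up/top-down passes the rebuttal describes) for source information to reach the leaves.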
Rebuttal 1: Rebuttal: ## Response to Reviewer #YDvu

Thank you for carefully going through the paper and raising such valuable questions. We hope these responses satisfactorily answer your questions:

### Q1. Ablation study.

1. Ablation study toward the GAT modification. Following the reviewer’s suggestion, we evaluate our model with/without the modification to the GAT aggregation mechanism (GAT-Mod). The results are as follows:

Table 1 MAE ps on Synthetic Dataset w/wo GAT-Mod

| | 4 Layers | 8 Layers | 16 Layers | 32 Layers | 64 Layers |
|-------------|---------:|---------:|----------:|----------:|----------:|
| W-GAT-Mod | **8.745** | **6.631** | **3.775** | **3.424** | **3.556** |
| WO-GAT-Mod | 9.752 | 6.589 | 4.886 | 4.597 | 4.507 |

Table 2 MAE ps on RISC-V Dataset w/wo GAT-Mod

| | 4 Layers | 8 Layers | 16 Layers | 32 Layers | 64 Layers |
|------------|----------|----------|-----------|-----------|-----------|
| W-GAT-Mod | **0.0313** | **0.0195** | **0.0128** | **0.0106** | **0.0176** |
| WO-GAT-Mod | 0.0395 | 0.0325 | 0.0274 | 0.0271 | 0.0294 |

2. Other discussions.
- For the residual connections from the bottom-up tree: if we remove this part, the model degrades to a conventional GAT working on a directed graph. In this case, the leaf nodes can only gather information from the propagation path, leading to the loss of high-level substructure features.
- Regarding applying the two directed graphs to other GNNs, let us take the results in Tables 2 and 3 as a reference. Among all baseline models, GAT presents the highest accuracy. That's why we use GAT as the basic block in our SyncTREE model.

### Q2. Concern about the depth of RC trees and the number of convolution layers.

It’s true that if an RC tree is deeper than the GAT network, then the timing information cannot be propagated from source to sink. This is a typical issue of message-passing GNNs caused by their inner mechanism. 
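This depth limitation can be illustrated with a toy experiment (our own minimal sketch, not the SyncTREE implementation): on a directed path graph, L rounds of neighbor aggregation carry a source signal at most L hops, so sinks deeper than the network depth receive no timing information.

```python
# Minimal illustration (a hypothetical sketch, not the SyncTREE code):
# with L rounds of message passing along a directed path graph, a signal
# injected at the source reaches at most L hops.
def propagate(depth, layers):
    x = [0.0] * depth
    x[0] = 1.0  # signal at the source node
    for _ in range(layers):
        # each node aggregates its own value plus its parent's value
        x = [x[0]] + [x[i] + x[i - 1] for i in range(1, depth)]
    return x

reached = [v > 0 for v in propagate(depth=8, layers=3)]
print(reached)  # [True, True, True, True, False, False, False, False]
```

With only 3 aggregation rounds, the four deepest nodes of the 8-node path never see the source signal, mirroring an RC tree that is deeper than the GNN.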
A deeper graph structure might necessitate a deeper GNN architecture to capture long-range dependencies. To show the benefit of adding layers for deeper RC trees and the accuracy changes this brings to shallower circuits, we analyze the relative error with respect to RC tree depth and size under different model depths. We attach this result in the pdf of the global response; please refer to Fig. 1 and Fig. 2.

### Q3. 1) Clarification of Fig. 4d’s description; 2) Generalization problem.

1) We apologize for our unclear description of Fig. 4d. The results in Fig. 3 and Fig. 4 show that our approach demonstrates enhanced accuracy when applied to circuits of greater dimensions and more sinks.

2) To evaluate our model's performance thoroughly, we build two highly diverse datasets, as shown in the statistics in Fig. 10 and Fig. 11 of our supplementary material. Our model is trained on datasets containing various RC circuits of different sizes and topologies. Compared with the golden delays, the value range of the golden slews is more limited, which makes the model tend to predict the small slew values very well, as shown in Fig. 4d. However, delays are highly varied across circuit sizes; for example, delays of tiny circuits (e.g., RC trees that have fewer than 4 nodes) are only at the 1e-4 ps level, leading to higher errors for these circuits, as shown in Fig. 3d. It should be pointed out that these tiny circuits are not common in normal IC designs but deserve to be investigated with our model. As for the results in Fig. 4d showing increasing error with larger circuit sizes, we believe the overall accuracy is still quite satisfactory considering the high diversity of our dataset.

### Q4. Running Platform Details.

We address this question in the global response.

### Q5. 1) Interpretation of the MAE value; 2) Significance of our work; 3) Error impact on critical path delay.

We address points 1 and 2 in the global response. 
3) We follow the reviewer’s suggestion and calculate the critical delay MAE on both datasets. The critical delay MAE is the average absolute error between the golden critical delays and the predictions at the critical sinks across all circuits in the dataset. Please note the average critical delays on the Synthetic and RISC-V datasets are 183.25 ps and 0.68 ps, respectively.

| | 4L | 8L | 16L | 32L | 64L |
|-------------------------|-------|-------|-------|-------|-------|
| Synthetic Critical-MAE | 78.828| 59.976| 32.004| **24.837** | 28.661|
| RISC Critical-MAE | 0.4034| 0.2170| 0.1311| **0.1214** | 0.1283|

From the result, we can observe that the critical delay MAE is closely related to the overall MAE.

### Q6. 1) Transfer learning to different benchmarks/circuits; 2) The data-efficiency advantage of the physics-informed loss function.

1) Our model is trained on datasets containing various RC circuits of different sizes and topologies, which means that our model doesn’t need to be trained differently for different circuit sizes. Besides, our two benchmark datasets are from different sources, so we use different hidden-dimension settings to optimize the accuracy accordingly; it is therefore not feasible to do transfer learning across benchmarks due to the model difference and dataset inconsistency.

2) In terms of data efficiency, we implement an additional experiment to evaluate the possible benefit of the TC loss under different training-set percentages.

| | Synthetic-W-TC | Synthetic-WO-TC | RISC-V-W-TC | RISC-V-WO-TC |
|--------|---------------|----------------|-------------|--------------|
| 25% | 4.2838 | 4.5682 | 0.0311 | 0.0342 |
| 50% | 4.0534 | 4.0246 | 0.0166 | 0.0228 |
| 75% | 3.9063 | 3.8453 | 0.0149 | 0.1912 |

From the result, we can observe that SyncTREE with the TC loss exhibits clear benefits in data efficiency on the RISC-V dataset.

--- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response. 
The authors mostly addressed my questions. The overall prediction error in terms of MAE versus the average critical delay is still not small enough for accurate timing prediction, which needs further improvement. I will increase the score to 6. --- Reply to Comment 1.1.1: Comment: Dear reviewer #YDvu, We are pleased that our rebuttal addressed your concerns. We greatly appreciate your recognition of our work!
Summary: This manuscript presents SyncTREE, a method to speed up timing analysis in IC design. Strengths: - The problem that the manuscript is trying to address is important (increasing the speed of timing analysis). - Evaluation and comparison to other machine-learning-based methods. Weaknesses: - It is unclear whether the Mean Average Error is measured with respect to SPICE simulation. - Table 2 and Table 3 should include the time it takes to perform the predictions. - Additionally, what is the increase in speed between SyncTREE and SPICE simulation? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Check Weakness section Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Check Weakness section Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer #sZvh

Thank you for your comments and the time spent reviewing the work. We try to address all the raised points in the following content.

### Q1. Clarification about MAE.

In our experiments, we treat the SPICE simulation measurements as the golden timing results. The Mean Average Error (MAE) is measured between the predictions and the SPICE simulation results.

### Q2. Running time of the results in Table 2 and Table 3.

We additionally record the inference time (s) of the results in Table 2 and Table 3 as follows:

1. Table 2

| | GCN | GAT | GraphSAGE | DeepGCN | GraphTrans | NTREE | SyncTREE |
|----|--------|--------|-----------|---------|------------|-------|----------|
| 4L | 0.003 | 0.005 | 0.002 | 0.008 | 0.060 | 0.024 | 0.015 |
| 8L | 0.005 | 0.009 | 0.004 | 0.015 | 0.055 | 0.038 | 0.025 |
| 16L| 0.011 | 0.020 | 0.007 | 0.029 | 0.059 | 0.072 | 0.044 |
| 32L| 0.019 | 0.039 | 0.013 | 0.055 | 0.062 | 0.114 | 0.100 |
| 64L| 0.036 | 0.071 | 0.025 | 0.112 | 0.056 | 0.233 | 0.185 |

2. Table 3

| | GCN | GAT | GraphSAGE | DeepGCN | GraphTrans | NTREE | SyncTREE |
| --- | ------ | ------ | --------- | ------- | ---------- | ----- | -------- |
| 4L | 0.007 | 0.008 | 0.006 | 0.018 | 0.101 | 0.057 | 0.035 |
| 8L | 0.010 | 0.015 | 0.008 | 0.033 | 0.099 | 0.110 | 0.051 |
| 16L | 0.017 | 0.031 | 0.015 | 0.070 | 0.085 | 0.193 | 0.097 |
| 32L | 0.033 | 0.060 | 0.029 | 0.131 | 0.084 | 0.245 | 0.162 |
| 64L | 0.0659 | 0.1172 | 0.0665 | 0.269 | 0.087 | 0.317 | 0.294 |

### Q3. Improvement in speed of SyncTREE.

As shown in Fig. 7, we evaluate the computation efficiency of the SyncTREE model and SPICE simulation as circuit size grows. It shows that SyncTREE is significantly faster than SPICE, with this advantage increasing as the circuit size grows. We further analyze the computational complexity of SPICE and SyncTREE in the Computation Efficiency section of the Results and Discussion part. 
To conclude, SPICE needs to solve DAEs in a time-incremental manner to simulate the circuit behavior, which involves intensive matrix operations like LU decomposition. In contrast, SyncTREE leverages Graph Attention Networks, which involve only linear transformations, leading to much faster running speed. --- Rebuttal Comment 1.1: Comment: Dear Reviewer #sZvh, Thank you so much for your time and efforts in reviewing our paper. If you felt any sections of our rebuttal were unclear, could you kindly point them out? We would appreciate your invaluable insights. Thank you again. Looking forward to your continued guidance. --- Rebuttal 2: Title: Respond to authors' rebuttal Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review. --- Rebuttal Comment 2.1: Title: Reminder Comment: A reminder of this.
Summary: This paper proposes a GNN-based method that specializes in timing analysis. Strengths: * This paper is well written and organized, easy for readers to follow. * Related background and related work that uses GNNs on circuits are discussed with a sufficient level of detail. * The core problem looks well formulated. * Experimental results look promising, with benchmarks on RISC-V; offering faster-than-SPICE simulation is appealing. * Performance and results are thoroughly analyzed. Weaknesses: Please see questions Technical Quality: 3 good Clarity: 3 good Questions for Authors: * For different circuit technology sizes, does the proposed method need to be trained differently? Or is the technology sizing part of the input? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * There is no explicit discussion on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer #6pFT

Thank you for your feedback and support! We hope these responses satisfactorily answer your questions:

### Q1. Model adaptation to different circuit sizes.

In this paper, our motivation is to devise a timing prediction model that can be applied to different circuits by leveraging the representation capability of Graph Neural Networks for unstructured data. Our model is trained on a dataset containing various RC circuits of different sizes and topologies, which means that our model doesn’t need to be trained differently for different circuit sizes.

--- Rebuttal 2: Title: Respond to authors' rebuttal Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review. --- Rebuttal 3: Comment: Dear reviewer #6pFT, Thank you very much for your recognition of our paper. If you felt any parts of the rebuttal were unclear, could you please kindly highlight them? We would be very appreciative of your insights.
Summary: The paper proposes a GNN model, dubbed SyncTREE, for IC RC-tree timing analysis. Two techniques are proposed: 1) two-pass message passing and 2) a Tree Contrastive loss. Experiments on two IC designs demonstrate the best accuracy of SyncTREE over other SOTA GNN models. Strengths: 1. Domain-specific knowledge is used in SyncTREE's two techniques. I think these two techniques are original. 2. Evaluations demonstrate the performance of SyncTREE's two techniques. The timing analysis shows the promising potential speedup of SyncTREE over traditional SPICE with increased circuit sizes. Weaknesses: 1. There is no clear guideline on setting the number of hidden dimensions and selecting the number of layers, which can influence accuracy and speed trade-offs for the practical adoption of the proposed SyncTREE. 2. I'm pondering whether an EDA-focused conference like DAC or ICCAD may be a more suitable platform for this paper. Given that the development of SyncTREE draws heavily on EDA-related domain-specific knowledge, such a conference could provide a more targeted audience and potentially foster more fruitful discussions for further refining this approach. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I am confused by the middle part of Fig. 2 w/ "Init Nodes". The hidden features from the bottom-up graph are used to initialize those in the top-down graph. Why is there an arrow from the top-down graph to the "Init Nodes"? In addition, the right-most part of Fig. 2 shows a "mask", which is given w/o explanation. 2. What is the rule of thumb for setting the number of hidden dimensions? In addition, it would be better if the authors could discuss the selection of the number of layers. These hyperparameters can influence the accuracy and speed trade-offs, and a discussion can potentially ease the use of the proposed SyncTREE. 3. Fig. 7 provides the computation time vs. circuit size. What are the hardware platforms for the SPICE simulation and the SyncTREE model? 
Since new graph training is needed when given a new circuit, it would be better if the authors could also provide training time vs. circuit size to better show the efficiency of the proposed SyncTREE. 4. Typos. E.g., "news ways" should be "new ways" in the Abstract section. Line # 216, "We" should be "we". 5. Considering that timing analysis is an important problem for the EDA community and considering existing efforts to use GNNs for EDA tasks, could conferences such as DAC and ICCAD be a more suitable platform for publishing this paper? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors do not discuss the limitations of the proposed method. It would be better if the authors could provide the limitations and potential future directions of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer #ZUNz

We greatly appreciate your careful and detailed review. Here are some points we would like to clarify:

### Q1. Explanation of “Init Nodes” & “mask” in Fig. 2.

The "Init Nodes" part of Fig. 2 involves the node attribute assignment of the top-down graph after each convolutional layer. In our two-pass message-passing mechanism, as shown in Equation (5), the node attributes in the top-down graph at layer $l+1$ are updated from two parts: 1) the node representations obtained by $AGG^{l}_{td}$; 2) the final node embeddings of the bottom-up graph. With this update mechanism, the node attributes are reassigned accordingly for the next layer.

The mask in Fig. 2 is used to select the leaf nodes for calculating the loss function and obtaining the final output. Since our task is to predict the propagation delay/slew of RC trees, we take the final representations at the leaf nodes (sinks) as the prediction results. That's why we apply masks: to improve model performance by reducing the influence of irrelevant information in the loss function.

### Q2. Rules for setting the hidden dimensions and the number of convolution layers.

The hyperparameter selection of hidden dimensions and the number of convolution layers directly determines the model performance. A lower hidden dimension might lead to underfitting, where the model fails to capture complex patterns in the graph data. Conversely, a higher hidden dimension could lead to overfitting. Besides, the number of convolutional layers in a GNN determines the depth of information propagation across the graph, so the depth of the input graphs is a crucial factor to consider when choosing the number of convolution layers. In our work, the choice of hidden dimensions and the number of convolution layers is decided by the trade-off between model accuracy and efficiency. 
By systematically exploring different parameter combinations and analyzing the model’s behavior, we then make an informed decision about the appropriate number of layers and dimensions to apply. Following the reviewer’s suggestion, we evaluate the inference time and model accuracy on the Synthetic and RISC-V datasets under different parameter combinations. The results are shown in the following tables (Note: 4L refers to 4 convolution layers, 32D refers to 32 hidden dimensions).

Table 1 MAE ps (Inference Time s) on Synthetic Dataset (batches: 31, batch size: 32)

| | 4L | 8L | 16L | 32L | 64L |
|--------|------------|------------|-------------|-------------|-------------|
| 32D | 8.745 (0.010) | 6.631 (0.018) | 3.775 (0.035) | **3.424** (0.068) | 3.556 (0.141) |
| 64D | 9.075 (0.014) | 7.258 (0.026) | 4.886 (0.051) | 4.662 (0.102) | 4.563 (0.204) |
| 128D | 9.564 (0.022) | 6.986 (0.041) | 4.737 (0.079) | 4.419 (0.157) | 5.106 (0.314) |

Table 2 MAE ps (Inference Time s) on RISC-V Dataset (batches: 647, batch size: 128)

| | 4L | 8L | 16L | 32L | 64L |
|-------|-----------------|-----------------|-----------------|-------------------|------------------|
| 32D | 0.0385 (0.017) | 0.0569 (0.021) | 0.0213 (0.039) | 0.0145 (0.075) | 0.0244 (0.136) |
| 64D | 0.0352 (0.021) | 0.0271 (0.034) | 0.0149 (0.056) | 0.0120 (0.103) | 0.0236 (0.202) |
| 128D | 0.0313 (0.033) | 0.0195 (0.054) | 0.0128 (0.096) | **0.0106** (0.160) | 0.0176 (0.312) |

Given the above statistics, we set 32 and 128 hidden dimensions for our model on the Synthetic and RISC-V datasets, respectively.

### Q3. 1) Hardware platforms running SPICE & our experiments; 2) Plotting training time vs. circuit size.

1) All experiments in this paper are implemented with the PyTorch 1.13.1 and PyTorch Geometric 2.2.0 frameworks, and executed on an Ubuntu server equipped with an Intel Xeon Gold 6330 CPU with 56 cores/2 threads running at 2.0 GHz. 
The reference SPICE simulations are carried out with the commercial Synopsys HSPICE simulator on an AMD Ryzen 3950X with 16 cores/32 threads at 3.5 GHz.

2) As shown in Table 1, Fig. 10, and Fig. 11, our training and validation dataset is composed of RC trees with different sizes and topologies. During model training, all circuit samples are shuffled randomly to reduce bias and improve generalization. Technically, a training batch is composed of various circuits of different sizes and topologies, so it is hard to get the exact training time for specific circuit sizes. Moreover, since graph neural networks can deal with unstructured data, we don't need to retrain the model when given a new circuit. In fact, in the inference stage of our model, the input circuit graphs are new and not seen during training.

### Q4. Typo corrections.

We appreciate your keen attention in pointing out the typos and apologize for any oversight on our part. We will correct the errors you mentioned in the revision.

### Q5. A more suitable platform for this paper.

We appreciate your thoughtful consideration of the paper's relevance to different conference platforms within the EDA community. However, we believe that NeurIPS remains a suitable platform for the publication of our work for the following reason. Our work is mainly about physics-informed deep learning, which combines principles from physics in a specific field with deep learning techniques, and aligns well with the interdisciplinary nature of NeurIPS. As examples, several recent EDA-related papers have been accepted by premier AI conferences, which makes us firmly believe NeurIPS can effectively draw attention from both the machine learning and EDA communities, and more confident in publishing our work through this platform.

--- Rebuttal 2: Title: Respond to authors' rebuttal Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review. 
--- Rebuttal Comment 2.1: Title: Reminder Comment: A reminder of this. --- Rebuttal 3: Comment: Dear reviewer #ZUNz, Thank you very much for your time and efforts in reviewing our paper. If you felt any parts of the rebuttal or the paper were unclear or ambiguous, could you kindly highlight them? Your insights will be greatly appreciated.
Rebuttal 1: Rebuttal: ## General Response for Common Questions

Thanks for all reviewers' constructive suggestions, which helped us identify some points we didn’t explain clearly. We believe it’s necessary to make some global clarifications on the following points:

### 1. Significance of our work.

Timing analysis is crucial for ensuring the proper functioning, performance, and reliability of integrated circuits. Fast and accurate timing prediction can greatly reduce the runtime overhead, which is very significant for IC design. In this paper, we propose a tree-based graph neural network, SyncTREE, to speed up timing analysis. The significance of our work is listed below:

- To the best of our knowledge, this is the first work applying GNNs to directly make timing predictions, which can offer a good baseline for future works.
- Our model extends beyond conventional GNNs and presents the best representation power for RC trees.
- Compared with SPICE, our model achieves satisfactory accuracy with far less runtime overhead, which means our model has the potential to expedite the IC design process.
- Conventional timing analysis tools like static timing analysis (STA) have limited parallelism capability, slowing down analysis and optimization tasks, while our model is GAT-based and thus can be parallelized easily.

### 2. Dataset clarification.

Our Synthetic dataset and RISC-V dataset are composed of various circuits with different sizes and structures. In our work, we devise a timing prediction model that can be applied to different circuits by leveraging the representation capability of Graph Neural Networks for unstructured data. When given a new and unseen circuit, we don’t need to retrain the model.

### 3. Running platform information

All experiments in this paper are implemented with the PyTorch 1.13.1 and PyTorch Geometric 2.2.0 frameworks and executed on an Ubuntu server equipped with an Intel Xeon Gold 6330 CPU with 56 cores/2 threads running at 2.0 GHz. 
The reference SPICE simulations are carried out with the commercial Synopsys HSPICE simulator on an AMD Ryzen 3950X with 16 cores/32 threads at 3.5 GHz.

### 4. MAE metric interpretation

The acceptable propagation delay MAE depends on the specific requirements and performance criteria of the circuit and its intended application. Some applications, such as high-performance computing or communication systems, demand minimizing propagation delay error to ensure accurate timing and reliable operation. Considering that the highest current CPU clock rate does not surpass 10 GHz, the duration of one CPU execution cycle is above 100 ps. Therefore the 0.05 ps error example reviewer #YDvu mentioned is quite small for IC design. Moreover, it should be pointed out that since our goal is to compare with SPICE in both accuracy and computation cost, we set the timing results obtained by SPICE simulation as golden and present the percent error between SPICE and our model in Fig. 3 (b) (d) and Fig. 4 (b) (d). Pdf: /pdf/3ad87d1d60ee423c00e8591c4f1a1e18f65b2a78.pdf
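The cycle-time argument above can be checked with simple arithmetic (our own illustration, not taken from the paper): at a 10 GHz clock, one cycle lasts 100 ps, so a 0.05 ps MAE is a tiny fraction of a cycle.

```python
# One clock cycle at 10 GHz lasts 1 / 10e9 s = 100 ps.
clock_hz = 10e9
cycle_ps = 1 / clock_hz * 1e12
error_ps = 0.05  # the delay MAE discussed in the review

print(cycle_ps)                        # 100.0
print(error_ps / cycle_ps * 100, "%")  # roughly 0.05% of a cycle
```

At more typical clock rates of a few GHz, the cycle is even longer and the relative error is smaller still.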
NeurIPS_2023_submissions_huggingface
2023
EDGI: Equivariant Diffusion for Planning with Embodied Agents
Accept (poster)
Summary: The paper proposes a new $\mathrm{SE}(3) \times \mathbb{Z} \times \mathrm{S}_n$-equivariant diffusion model based on the symmetries of the planning problem. The empirical results demonstrate that the proposed EDGI (Equivariant Diffusion for Generating Interactions) model exhibits enhanced efficiency and superior generalization capabilities, even when applied to unseen tasks. Most RL methods suffer from sample inefficiency and lack robustness to changes in the environment. This paper introduces spatial, temporal, and permutation symmetries into the diffusion model. Strengths: * The idea is simple and effective. The paper introduces equivariance into diffusion models, resulting in improved generalization performance on unseen tasks. * The paper designs a novel equivariant U-net architecture that incorporates temporal, object, and geometric layers for the symmetries, with internal representations for the different symmetries. Weaknesses: * Equivariance has been explored in both RL and model-based approaches. What is the difference from prior works that use equivariance? A comparison with other baselines that also use equivariance is needed. * The paper lacks analysis and ablation studies of the equivariance. For example, there are three types of equivariance. Which contributes the most to the performance improvement? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * More analysis and comparison with previous methods that also use equivariance is needed, at both the method level and the experiment level. * Which of the equivariances contributes the most to the performance improvement? * Can you explain more clearly how soft symmetry breaking is handled, and maybe provide some experimental results? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
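The permutation symmetry discussed in this review can be made concrete with a generic numerical equivariance check (a hypothetical sketch, unrelated to the actual EDGI code): a function applied slot-wise with shared weights commutes with any reordering of the object axis.

```python
import numpy as np

# Hypothetical sketch (not the EDGI implementation): a network applied
# slot-wise to per-object features is S_n-equivariant, i.e.
# f(x[perm]) == f(x)[perm] for every permutation perm of the objects.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

def f(x):
    # shared weights across object slots -> permutation-equivariant
    return np.tanh(x @ W)

x = rng.normal(size=(5, 4))  # 5 objects, 4 features each
perm = rng.permutation(5)

gap = np.abs(f(x[perm]) - f(x)[perm]).max()
print(gap)  # 0.0: permuting inputs then applying f equals applying f then permuting
```

An analogous check for SE(3) would compare f(Rx + t) against the correspondingly transformed f(x) for random rotations R and translations t.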
Rebuttal 1: Rebuttal: We are very appreciative of the reviewer’s time and effort in reviewing our manuscript. We are grateful to hear that the reviewer finds our idea to be “simple and effective” and that the reviewer thinks the addition of equivariance is indeed reflected in “improved generalization” on unseen tasks. We are also pleased to hear that the reviewer found our EDGI architecture to be “novel” in the manner in which we incorporate temporal, object, and SE(3) symmetries. We thank the reviewer again for their constructive criticism and now respond to the main clarification points below. **Comparison to other equivariant methods in RL** We agree that more baselines would be beneficial. We would be grateful for any concrete suggestions of appropriate baselines to compare to. We are not aware of any RL baselines that are equivariant with respect to similar product groups of all relevant symmetries. To the best of our knowledge, our work is the first equivariant RL method with this property, and as such we believe there is no direct comparison. Similarly, we are not aware of any equivariant methods that solve (MB)RL through generative modeling. We will however extend our discussion of equivariant RL methods. These involve model-free methods that train SE(3)-equivariant DQN or SAC versions. In contrast, EDGI is a model-based approach that leverages recent advances in conditional generative modeling and is guaranteed to be equivariant to a full product group. Other recent work has focused on finite subgroups [3], which is an easier modeling problem. We further argue that none of the prior works consider equivariance in an offline RL / conditional generative modeling setting for robotics. As the reviewer notes, there have been recent developments in equivariant MBRL [4][5]. In [4] the authors consider symmetry with respect to $C_4$, the group of 90-degree rotations, though their evaluation is limited to toy 2D environments. 
[5] on the other hand uses SE(2)-equivariant steerable convolutions, suitable for image data, but not for 3D environments like those we consider. For now, we added one additional baseline: the Diffuser architecture trained with data augmentation. We discuss this experiment and show results in the overall response. **Equivariance ablation studies** We thank the reviewer for this suggestion. We designed ablation models that are equivariant with respect to only parts of the product symmetry group, and experimented with them on the navigation task. Our results are shown in Fig. 3 of the rebuttal result page and discussed in the overall response. Essentially, we find that a model that is equivariant with respect to SE(3), but not $S_n$, and a model that is equivariant with respect to $S_n$, but not SE(3), both perform better than the Diffuser baseline, but do not quite reach the sample efficiency of EDGI. This provides evidence that both symmetries are important. **Soft symmetry breaking** We thank the reviewer for pointing out that our explanations of symmetry breaking were not sufficiently clear. We will expand the description in the paper, but the essential idea is the following: Because EDGI builds on the Diffuser paradigm and treats (MB)RL as a diffusion problem, the final behaviour consists of three components: a trained, equivariant diffusion model; a task-specific reward guide; and an initial or current state. The first component is by construction equivariant, while the latter two allow us to softly break the symmetry *if this is desired*, for instance because of a non-invariant task specification. In the EDGI framework, this form of soft symmetry breaking is very natural. We already demonstrate this feature in our experiments. 
In the robotic manipulation environment, we consider both unconditional block stacking, which does not use a reward guide and maintains permutation equivariance, and two conditional stacking tasks, in which equivariance is broken by the task specification (e.g. “stack the blue block on top of the red block”). Specifically, each of the conditional and rearrangement tasks in Table 1 requires the Kuka arm to interact with blocks in the environment by stacking them in a particular order, which requires breaking permutation symmetry due to the specific order of blocks in the stack needed to get reward. Please see Fig. 1 in our 1-page PDF for a visual illustration of the environment and task. EDGI achieves good rewards on all tasks, showing the strength of the combination of an equivariant base model and a non-equivariant task-specific reward guide. We thank the reviewer for their valuable feedback and great questions. We hope that our rebuttal fully addresses all the salient points raised by the reviewer, and we kindly ask the reviewer to consider upgrading their score if satisfied with our responses. We are also more than happy to answer any further questions that arise. **References** [1] Wang, Dian, Robin Walters, and Robert Platt. "$\mathrm{SO}(2)$-Equivariant Reinforcement Learning." arXiv:2203.04439. [2] Mondal, Arnab Kumar, et al. "EqR: Equivariant Representations for Data-Efficient Reinforcement Learning." ICML 2022. [3] Zhu, Xupeng, et al. "On Robot Grasp Learning Using Equivariant Models." arXiv:2306.06489. [4] Deac, Andreea, Théophane Weber, and George Papamakarios. "Equivariant MuZero." arXiv:2302.04798. [5] Zhao, Linfeng, et al. "Integrating Symmetry into Differentiable Planning with Steerable Convolutions." ICLR 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Most concerns are addressed. I will update the score accordingly.
Summary: This paper proposes an enhancement to a planning/model-based RL method leveraging diffusion models. Specifically, the diffusion model is structured to be equivariant to the known symmetries of reasoning about objects in 3D space, namely translation symmetry, time shift symmetry, and permutation of object labels in the scene. The paper proposes a modeling approach which improves performance on navigation and block stacking benchmarks. The improvement is modest over the baseline Diffuser framework, but taking symmetries into account dramatically improves performance in low data regime and in regimes where the evaluation is performed in a setting that is enforced to be symmetric to the training setting. Strengths: Very well argumented approach. Clear writing. Sound experimental results. Weaknesses: The paper up to section 3.1 is very repetitive and could be made more concise, leaving more space to introduce some of the modeling details that were left to the appendix. The ROI of the approach (ratio of improvement over the additional complexity introduced by the model) is limited. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 'training and planning are currently expensive' - do you have more specific details about the overhead, e.g. compared to the Diffuser approach? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations of the approach have been adequately discussed, modulo my question above. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive appraisal of our work! We are delighted to hear that the reviewer finds our approach to have solid motivations and arguments. We are also heartened to hear that the reviewer views our manuscript to be “clearly written” and to contain “sound experimental results”. We now address the main clarification points. **Paper is repetitive up to section 3.1** We acknowledge the reviewer's concern regarding the lack of concision in our presentation up to section 3.1. We believe it is useful to reiterate the key design objectives in our architecture, especially considering that our work is at the intersection of MBRL, diffusion generative models, and equivariant machine learning. By emphasizing the role of equivariance, we aimed to motivate the key modeling decisions in EDGI. For instance, SE(3) equivariance directly leads to our geometric layer, while permutation symmetry leads to the permutation layer. Nevertheless, we agree with the reviewer that these points could be more concise. We will update the manuscript to give much more detail on our specific symmetry groups, their corresponding group actions, and representations, as well as a primer on group and representation theory in the appendix. We will also add more details on our architecture and explicitly show its equivariance properties, following the logic we sketch in the global response. **Limited ROI** We respect the reviewer's apprehension regarding the ROI with respect to the complexity added by EDGI. However, we would like to politely push back, as the inclusion of symmetries into the Diffuser architecture is not an optional design decision but rather a key aspect of modeling, since the state and action spaces of our environments are *guaranteed* to contain this rich geometric structure. Furthermore, EDGI follows a simple design philosophy and is plug-and-play in the sense that each layer operates on one specific symmetry group.
Please see our global response for a detailed description of EDGI’s equivariant network design, including new ablations that highlight the increased sample efficiency of being equivariant to the full product group. **Training and planning complexity** We appreciate the reviewer's comment regarding the training and planning complexity of EDGI. We provide extensive details on this aspect in our global response as well as the 1-page document, which contains training loss curves and inference costs for EDGI as well as the original Diffuser. In summary, we find that EDGI converged about 4x faster during training, measured by the number of iterations until we observe a plateau in the loss. Moreover, as we demonstrate in Fig. 3 of our paper, EDGI also enjoys significant gains in sample efficiency over the vanilla Diffuser architecture. However, EDGI does incur a large overhead per iteration. We believe much of this additional overhead can be overcome with implementation optimizations in line with recent efforts [1][2], which we leave as exploration for future work. We thank the reviewer again for their valuable feedback and great questions. We hope that our rebuttal addresses their questions and concerns, and we kindly ask the reviewer to consider upgrading their score if they are satisfied with our responses. We are also more than happy to answer any further questions that arise. **References** [1] Liao, Yi-Lun, and Tess Smidt. "Equiformer: Equivariant graph attention transformer for 3d atomistic graphs." arXiv:2206.11990. [2] Esteves, Carlos, Jean-Jacques Slotine, and Ameesh Makadia. "Scaling Spherical CNNs." arXiv:2306.05420. --- Rebuttal Comment 1.1: Comment: Thank you for the responses.
Summary: The paper introduces the Equivariant Diffuser for Generating Interactions (EDGI), an $SE(3)\times \mathbb{Z} \times S_n$-equivariant diffusion model for model-based reinforcement learning. The proposed method maintains equivariance with spatial symmetry as depicted by $SE(3)$, the discrete time translation symmetry signified by $\mathbb{Z}$, and the object permutation symmetry symbolized by $S_n$. The paper further provides a theoretical analysis of the conditions under which the samples from an equivariant diffusion model will be group invariant or symmetry-breaking. Finally, experimental evaluations were conducted in both manipulation and navigation tasks, demonstrating that the proposed method surpasses the performance of non-equivariant baselines. Strengths: 1. The method considers a large variety of symmetries, all of which are common in many robotic tasks. 2. The concept of using a sequence of three equivariant layers to handle the three distinct symmetries is novel and intriguing. 3. The generalization experiment demonstrates convincing results. Weaknesses: The experiment section could be more comprehensive. First, the paper does not provide an ablation study to justify the three symmetries considered. Will removing one or two of the symmetries harm the performance? Which of the three symmetries contributes the most to the success of the architecture? Second, a data augmentation baseline can also be considered. Though it is widely demonstrated that the equivariant network architecture normally performs better than the learned equivariance through data augmentation, it is still a valuable experiment to validate the proposed network architecture. Moreover, will data augmentation + equivariant network yield even better performance? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Although the proposed sequential approach for handling three different types of symmetries is conceptually sound, a theoretical understanding would be beneficial.
Specifically, can the author provide a theoretical analysis that the operation on one of the three equivariant layers will not influence the other two equivariant properties? 2. In some of the experiments (e.g., Navigation in Table 1), EDGI does not significantly outperform the baseline Diffuser. This is different from what is normally observed in the equivariant learning literature when comparing an equivariant approach vs. a non-equivariant approach. It would be helpful if the authors could provide some analysis on this. 3. Why is the hidden layer in the form of $\rho_0 \oplus \rho_1$? Will adding higher-frequency signals in the hidden layer (i.e., $\oplus_{j=0}^{k} \rho_j$ where $k>1$) improve the performance? 4. Some figures of the experimental domains would be helpful (at least in the appendix) to better understand the environments. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The paper addresses its limitations, but the discussion could be expanded. For instance, the proposed method seems highly constrained on the input data type, which could make it challenging to extend the proposed method to visual inputs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review and constructive criticism. We are glad that the reviewer found the way we incorporate symmetries in EDGI to be “novel and intriguing” and that our generalization experiments demonstrate “convincing results”. We now address the main questions raised by the reviewer. **Will removing one or two of the symmetries harm the performance?** Good question. We answer it in detail in our global response to all reviewers. In summary, we implemented a new ablation model that is equivariant with respect to $S_n$, but not SE(3), and a second ablation model that is equivariant with respect to SE(3), but not to $S_n$. As we show in Fig. 3 of the rebuttal response page, these methods perform much better than the Diffuser baseline, but are not quite as sample-efficient as EDGI with the full symmetry group. This provides more evidence that it is worthwhile to be equivariant with respect to each symmetry found in our robotics environments. **Data augmentation baseline** Thanks for the suggestion. We trained an additional Diffuser model on the navigation environment, augmenting the data with random SO(3) rotations for each sample. Fig. 3 in the rebuttal result page shows the results. We find that data augmentation substantially improves the Diffuser model in terms of sample efficiency and generalization across the symmetry group, though EDGI still maintains a performance benefit in terms of sample efficiency. **Data augmentation for equivariant networks** This is an interesting suggestion. However, as long as we only consider data augmentation through symmetry transformations (like rotating the scene or permuting the objects), data augmentation cannot help equivariant methods. Informally, the reason for this is that the original and transformed data lie in the same orbit of the symmetry group.
This means that both the original and transformed data are acted on by the same network weights in the same way, and lead to the same loss and network updates. Thus we expect no benefit from data augmentation for equivariant networks. **Theoretical analysis of equivariance** In our global response, we demonstrate explicitly that each of our novel network layers leaves the dimensions of the data it does not act on unaffected, and that the architecture is equivariant with respect to the whole symmetry group. **Performance sometimes on par with baseline Diffuser** We agree with the reviewer that in our navigation environment, when training on a large dataset, the performance of EDGI is roughly on par with the baseline. However, we still find two key advantages for our equivariant method: it is much more sample-efficient than Diffuser, as shown in Fig. 3 of our paper, and it generalizes more robustly under the symmetry group, as shown in the right column of Table 1 of our paper. **Choice of representations** We thank the reviewer for this great question. We chose to use a direct sum of the $\rho_0$ and $\rho_1$ representations for three reasons: it is simple, computations are relatively cheap, and most real-world geometric quantities can be expressed in these representations, as they are scalars and vectors. However, EDGI *can* be extended to include higher representations. Doing so would certainly increase the computational cost of the geometric layers. It is an intriguing question whether that would be offset by benefits in expressivity. In some domains, like molecular dynamics, including higher representations has been very advantageous, see e.g. [1] and the architectures referenced therein. At the same time, there are some results showing that latent scalar and vector representations are enough to represent any equivariant map between vectors [2].
Our EDGI architecture is built on the theoretical foundations of [2], and we leave investigations of higher-order representations in the MBRL domain as natural directions for future work. **Illustrations of environments** Thank you for the suggestion. In Fig. 1 of the rebuttal response page, we show a rendering of the robotic manipulation environment. For the navigation environment, please have a look at Fig. 1 of the paper. We will update our appendix to include these figures. **Constraints on the input data type** Indeed, we currently assume access to a representation of the data in terms of group representations. We will stress this in the final version of the paper. We consider learning from raw visual inputs and using symmetries in this setting as a natural direction for future work. We thank the reviewer again for their time and effort reviewing our work. We hope that our rebuttal has successfully addressed all the great points raised by the reviewer and allows the reviewer to re-evaluate our work with this rebuttal as context. Finally, please also have a look at our global response. In addition to the important baselines and ablations you suggested, we also show there that EDGI converges faster than the Diffuser baseline. **References** [1] Joshi, Chaitanya K., et al. "On the expressive power of geometric graph neural networks." arXiv:2301.09308. [2] Villar, Soledad, et al. "Scalars are universal: Equivariant machine learning, structured like classical physics." NeurIPS 2021. --- Rebuttal Comment 1.1: Comment: The reviewer appreciates the author's great rebuttal; most of my concerns are addressed. I would like to increase my evaluation to Weak Accept.
Summary: The paper introduces the Equivariant Diffuser for Generating Interactions (EDGI), a novel algorithm for model-based reinforcement learning (MBRL) and planning. It addresses the challenge of structured environments with spatial, temporal, and permutation symmetries, which are often overlooked by existing planning and MBRL algorithms. EDGI leverages the concept of equivariant diffusion to maintain symmetry under the product of SE(3), Z, and Sn symmetry groups. The algorithm achieves improved sample efficiency and generalization by incorporating a new SE(3) × Z × Sn-equivariant diffusion model that supports multiple representations. Strengths: 1) Novel Approach: The idea of equivariant diffusion is innovative and introduces a fresh perspective on addressing symmetries in planning and MBRL. 2) Multiple Representations: The EDGI algorithm supports multiple representations, which enhances its flexibility and applicability to a wider range of tasks. Weaknesses: Lack of Clarity in Conceptual Explanation: The introduction of equivariant symmetries could have been more accessible, with clearer explanations of mathematical notations, making it easier for readers to comprehend the concept. Insufficient Emphasis on Sample Efficiency Improvement: The paper could provide a more explicit explanation of how equivariant symmetries contribute to improved sample efficiency. For instance, the relationship between symmetry breaking and the ability to transfer equivalent trajectories needs further clarification. Does the symmetry breaking approach allow for easy transfer of trajectories between equivalent states, as indicated by Figure 1? Limited Discussion on Network Design: The paper lacks a thorough discussion on how the network architecture is designed and whether it guarantees the preservation of equivariant symmetries. It would be beneficial to elaborate on the relationship between the network structure and the preservation of equivariant symmetries.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) Can you provide a more intuitive explanation of equivariant symmetries and their role in achieving sample efficiency and generalization? How does the proposed equivariant diffusion model differ from traditional diffusion models, and how does it support multiple representations? 2) Could you elaborate on the specific network design choices and how they ensure the preservation of equivariant symmetries? 3) In practice, what are the computational costs associated with training and using the equivariant diffusion model, particularly for tasks with larger symmetry groups or high-dimensional state and action spaces? 4) Are there any inherent trade-offs between achieving equivariant symmetries and other performance metrics, such as computational efficiency or convergence speed? How does EDGI address these trade-offs, if any? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1) Complexity of Symmetry Breaking: Although the paper mentions soft symmetry breaking through conditioning and classifier guidance, it does not delve into the challenges and limitations associated with breaking symmetries in complex environments. Further exploration of the limitations and potential difficulties in achieving effective symmetry breaking would provide a more realistic perspective. 2) Scalability: The scalability of the proposed equivariant diffusion model is not thoroughly discussed. It remains unclear how the algorithm's performance scales with increasing problem complexity or the size of the symmetry group. A deeper investigation into the computational requirements and scalability of the approach would be valuable. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and nuanced comments. We are glad that the reviewer found our contribution to be novel and that it provides “a fresh perspective on symmetries in planning and MBRL”. We also appreciate the fact that the reviewer valued the “flexibility” provided by EDGI through multiple representations, allowing it to extend to “a wider range of tasks.” We now address their key questions and concerns. **Improving accessibility on equivariance** We thank the reviewer for their constructive criticism regarding our exposition of symmetries and equivariance. We agree with the reviewer that our coverage of the background material may not have been sufficiently clear in the initial manuscript. Toward this end, we will update the paper to include a more comprehensive coverage of the exact groups we consider in this work. In particular, we will add more detail on the groups SO(3) and $S_n$ along with their representations and actions on our state and action spaces. We will further supplement this with a short primer on group and representation theory in the appendix, along with detailed pointers to more complete treatments for the interested reader in the main text. **Sample efficiency** We thank the reviewer for highlighting an important aspect, ultimately a main selling point for equivariant architectures: equivariance improves the sample efficiency of training. Essentially, the equivariance constraint ensures that when the network encounters a single sample in training, it learns not only how to transform that sample, but also all data points that can be generated from the sample through symmetry transformations like rotations or permutations. Hence, fewer samples are necessary to reach a strong, robust performance. This benefit of equivariance for sample efficiency has been demonstrated both theoretically and empirically in fields from molecular dynamics to robotics, see for instance [1] [2] [3] [4].
Sample efficiency is particularly important in the robotics case, as collecting training data can be expensive. The reviewer also asks about the role of symmetry breaking in the transfer of trajectories between equivalent states. This is a subtle point (and we will improve its discussion in the final version of the paper). Symmetry breaking consists of two aspects: conditioning on the initial state, as well as on guidance from a reward model. The conditioning on the initial state, together with the equivariance of the denoising model, is what ensures that we can easily transfer learned behaviors to transformed conditions. The guidance from the reward model, on the other hand, can be used to make the model behave *differently* on rotated situations, if this is desired. For instance, if a task consists of moving an object in a particular direction, then after the initial state is rotated, the agent’s behavior needs to adapt by more than just rotating. Reward guidance allows us to achieve that. **Network design choices and equivariance** We value the reviewer's feedback and agree that the paper could be improved by a richer discussion of our network design. How our architecture ensures equivariance is indeed crucial. We address it in the global response. In a nutshell, we can explicitly show that each layer in EDGI is equivariant to $\mathrm{SE(3)} \times S_n \times \mathbb{Z}$. **Computational complexity and scaling** We agree that the computational cost of EDGI is an important aspect that deserves further discussion. We address it in the global response. Essentially, in its current implementation EDGI has substantial overhead over the baseline Diffuser, but we see plenty of room for optimization. What’s more, EDGI converges substantially faster, offsetting the computational overhead. Please see our global response for a more thorough discussion.
As for scalability, EDGI has favorable scaling properties in the three relevant directions – number of objects $n$, number of time steps $H$, and number of channels $c$. Thanks to its modular structure with alternating layers attending to the different data axes, it has a time complexity of $\mathcal{O}(n^2 H c^2)$. When an efficient attention implementation is used, the memory complexity is even just linear in $n$. We will add a thorough discussion of this scaling behaviour to the final version of our paper. **Symmetry breaking** We thank the reviewer for pointing out that our explanations of symmetry breaking and its complexities were not sufficiently clear. We will expand the description in the paper, but we believe that the essential idea follows quite naturally from the framework of treating RL as a diffusion problem. The overall behavior of the agent consists of three components – a trained, equivariant diffusion model; a task-specific reward guide; and an initial or current state. The first component is by construction equivariant, while the latter two allow us to softly break the symmetry *if this is desired*, for instance because of a non-invariant task specification. All this requires is a non-equivariant reward model, for instance a simple MLP. We would like to thank the reviewer again for their thorough review. We hope we have sufficiently addressed their comments, and we respectfully ask the reviewer to reconsider their impression of the paper and potentially improve the given score. We look forward to discussing any further questions. **References** [1] Bietti, Alberto, Luca Venturi, and Joan Bruna. "On the sample complexity of learning under geometric stability." NeurIPS 2021. [2] Behboodi, Arash, Gabriele Cesa, and Taco S. Cohen. "A PAC-Bayesian generalization bound for equivariant networks." NeurIPS 2022. [3] Jumper, John, et al. "Highly accurate protein structure prediction with AlphaFold." Nature 2021. [4] Wang, Dian, et al.
"On-robot learning with equivariant models." arXiv:2203.04923. --- Rebuttal Comment 1.1: Title: Thanks for the author's response Comment: The authors' response has addressed my questions. I now have a clearer understanding of the algorithm's details. I am satisfied with this response. I have improved my score.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their thorough reviews and valuable feedback. We are encouraged that they found our approach of equivariant diffusion for planning “innovative” (reviewer **FPx8**), “simple and effective” (**kPhe**), and “very well argumented” (**28Jy**). In particular, they appreciated that our method supported products of multiple symmetry groups (**LvgJ**) and multiple representations (**FPx8**). We are also happy to hear that they found the experimental results “convincing” (**LvgJ**) and the writing “clear” (**28Jy**). **Showing equivariance (FPx8, LvgJ)** We now show more thoroughly that our architecture is equivariant. This is best demonstrated explicitly, by proving for each network layer $f$ and each symmetry group $G$ that $f(g \cdot w) = g \cdot f(w)$ for any data $w$ and group element $g \in G$. Here $\cdot$ denotes the group action. In the following, we will do this for our geometric layers, as they are the most novel, for both SE(3) transformations and permutations. Let $w \in \mathbb{R}^{n \times H \times c \times 4}$ be data in our internal representation, such that the entries $w_{toc}$ decompose into SO(3) scalars $s_{toc}$ and SO(3) vectors $v_{toc}$. Let $S(w_{to})$ be the set of all scalars and all pairwise $SO(3)$ inner products between the vectors $v_{to}$, as discussed in line 221 in the main paper. The outputs of the geometric layer are then $f(w)\_{toc} = (\phi( S(w\_{to}) ), \sum_{c’} \psi( S(w\_{to}) ) v\_{toc’} )$. First, consider what happens under permutations of the objects, $w_{toc} \to w_{t\pi(o)c}$ for a permutation $\pi \in S_n$. We have $f(\pi \cdot w)\_{toc} = (\phi( S(w\_{t\pi(o)}) ), \sum\_{c’} \psi( S(w\_{t\pi(o)}) ) v\_{t\pi(o)c’} ) = f(w)\_{t \pi(o) c} = (\pi \cdot f(w))\_{toc}$. Thus, because this layer “leaves the object dimension untouched”, it is equivariant with respect to object permutations. An analogous argument can be made for temporal translations.
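These equivariance properties can also be sanity-checked numerically. The toy geometric layer below is our own illustrative sketch, not EDGI's actual implementation: the sizes, `tanh` nonlinearities, and random weights are made up, and the time axis is dropped for brevity. It verifies that scalar outputs are invariant and vector outputs transform correctly under both object permutations and rotations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 4, 3  # objects, channels (toy sizes; time axis omitted)
W_phi = rng.normal(size=(c + c * c, c))
W_psi = rng.normal(size=(c + c * c, c * c))

def geometric_layer(s, v):
    """s: (n, c) SO(3) scalars, v: (n, c, 3) SO(3) vectors.
    Invariants S(w): the scalars plus all pairwise inner products <v_c, v_c'>."""
    gram = np.einsum("ncd,nkd->nck", v, v).reshape(n, -1)
    S = np.concatenate([s, gram], axis=-1)
    phi = np.tanh(S @ W_phi)                       # scalar outputs
    psi = np.tanh(S @ W_psi).reshape(n, c, c)      # channel-mixing coefficients
    v_out = np.einsum("nck,nkd->ncd", psi, v)      # vector outputs
    return phi, v_out

s, v = rng.normal(size=(n, c)), rng.normal(size=(n, c, 3))
phi0, v0 = geometric_layer(s, v)

# SO(3) equivariance: rotating input vectors leaves scalars unchanged
# and rotates the output vectors by the same R.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))                  # det(R) = +1
phi_r, v_r = geometric_layer(s, v @ R.T)
assert np.allclose(phi_r, phi0) and np.allclose(v_r, v0 @ R.T)

# S_n equivariance: permuting objects permutes the outputs.
perm = rng.permutation(n)
phi_p, v_p = geometric_layer(s[perm], v[perm])
assert np.allclose(phi_p, phi0[perm]) and np.allclose(v_p, v0[perm])
```

Because $\phi$ and $\psi$ consume only the invariants $S(w)$, the check passes for any weight values; no training is required.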
Finally, consider the behavior under spatial transformations. Like most (S)E(3)-equivariant architectures, we deal with translations through canonicalization, defining all coordinates with respect to the center of mass or the robot base, as applicable. This means we only have to analyze the behavior under rotations. Let $R \in \mathrm{SO(3)}$, such that $R \cdot w = R \cdot (s, v) = (s, R \cdot v)$. By definition, orthogonal matrices leave the inner product invariant, thus $S(R \cdot w) = S(w)$. The geometric layer applied to rotated inputs then gives $f(R \cdot w)\_{toc} = (\phi( S(R \cdot w\_{to}) ), \sum\_{c’} \psi( S(w\_{to}) ) R \cdot v\_{toc’} ) = (\phi( S(w\_{to}) ), R \cdot \sum\_{c’} \psi( S(w\_{to}) ) v_{toc’} ) = (R \cdot f(w))\_{toc}$. Hence the geometric layer is equivariant with respect to SE(3). **Computational cost (FPx8, 28Jy)** There are two aspects to the computational cost of EDGI. First, EDGI has a more complex computational graph than the baseline Diffuser. In the current implementation, each step (forward plus backward pass) takes roughly 5.5 times as long as that of a baseline model. We believe that this is not a fundamental property of our method, but just reflects the lack of optimization. We see lots of potential for further speed-ups by optimizing the memory layout of our representations, the contraction paths in tensor multiplications, using optimized attention implementations, or compiling the computational graph. We plan to investigate them in the future. Second, thanks to its stronger inductive biases, EDGI requires substantially fewer optimizer steps to converge, roughly by a factor of 4. We illustrate this in Fig. 1 of the attached PDF. This at least partially makes up for the computational overhead. **Additional baselines (kPhe, LvgJ)** As suggested by reviewer **LvgJ**, we ran the navigation experiment with a version of the Diffuser model trained with data augmentation.
We focused on the spatial symmetry group and transformed each sample with SO(3) rotations sampled uniformly from the Haar measure. The results are shown in Fig. 3 of the attached PDF. We find that data augmentation substantially improves the sample efficiency of the Diffuser model. However, EDGI still maintains a performance benefit in the low-data regime. **Ablating the effect of different symmetry groups (LvgJ, kPhe)** As suggested by the reviewers, we investigated the importance of each of the three symmetry groups towards final performance. We designed two ablation models: one equivariant with respect to SE(3), but not $S_n$; the other equivariant with respect to $S_n$, but not SE(3). Both models are equivariant to temporal translations, just like EDGI and the baseline Diffuser. In Fig. 3 of the attached PDF, we show how these models perform on the navigation task. Both of these partially equivariant models outperform the baseline Diffuser in terms of sample efficiency, and come closer to EDGI. To our surprise, the permutation-equivariant architecture performed slightly better, indicating large benefits of permutation equivariance. However, using EDGI’s full symmetry group is still the most sample-efficient in low-data settings. We hope that with this global response and the individual responses to the reviewers, we were able to address all questions. We look forward to further discussion. Pdf: /pdf/0f8aae65ed7a5995674a0ce67730406a3c097a2d.pdf
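As a concrete aside on the augmentation procedure above: rotations distributed uniformly under the Haar measure on SO(3) can be drawn via the QR decomposition of a Gaussian matrix. The sketch below is purely illustrative (the trajectory array and its shape are made up; this is not our actual training code):

```python
import numpy as np

def random_rotation(rng):
    """Draw R ~ Haar(SO(3)): QR-decompose a Gaussian matrix, fix the sign
    ambiguity of the decomposition, then force det(R) = +1."""
    Q, T = np.linalg.qr(rng.normal(size=(3, 3)))
    Q = Q * np.sign(np.diag(T))     # makes Q Haar-distributed on O(3)
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1               # reflect onto the det = +1 component
    return Q

rng = np.random.default_rng(0)
R = random_rotation(rng)
assert np.allclose(R @ R.T, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # proper rotation

# Augmenting one sample: apply the same rotation to every 3D quantity
# in the trajectory (here, a made-up array of H waypoint positions).
traj = rng.normal(size=(32, 3))
traj_aug = traj @ R.T
```

The sign-fixing step matters: the raw QR output of numpy is not uniformly distributed over the orthogonal group without it.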
NeurIPS_2023_submissions_huggingface
2023
Learning Universal Policies via Text-Guided Video Generation
Accept (spotlight)
Summary: This paper introduces a novel approach to the text-conditioned video generation task by treating it as goal-conditioned RL, where the text is formulated as the goal and the multi-step image sequences in the video are the consecutive observations. The proposed method, termed Unified Predictive Decision Process (UPDP), aims to learn effective control strategies. Compared to traditional MDPs, the sequences in UPDP are determined by both the initial frame and the task description. This enables bypassing reward design and facilitates non-Markovian modeling. To model decision-making within UPDP, this paper introduces UniPi, which consists of two key components: i) a diffusion model planner and ii) a task-specific action generator. Empirical evaluations on various control tasks demonstrate the effectiveness of the proposed method. Strengths: 1. This paper is clear to follow. 2. The proposed method is effective and seems reasonable. 3. A large number of experiments showing performance gain. 4. Synthesized frames are illustrative and interesting. Weaknesses: 1. While the high-level idea presented in this paper is intuitive and reasonable, the detailed descriptions of certain specific modules require further emphasis. For instance, the video-based planner $\rho(\cdot|x_0,c)$ is employed to generate image sequences (or videos). However, the role of the action in this context is not clearly defined. If the planner is unrelated to the action, it raises questions regarding how the action can influence the state. From the statement in lines 115-116 showing that "This design choice isolates planning decisions from action-specific mechanisms, allowing the planner to be environment and agent agnostic", it seems to suggest that this module is indeed a forecaster rather than a planner. 2. Text-conditioned generation has gained recent popularity.
Although the paper states that planning through video generation poses challenges due to the need for specific initial images to complete tasks, guided generation (conditioned on an initial image and a language description) is a common choice that has been extensively studied. Similarly, inverse dynamics is also a well-explored method that has been proven effective. Therefore, the combination of these techniques may not provide significant novelty from a model design perspective. 3. Some details in the paper may lead to misunderstandings. For example, in section 3.1, the author proposes tiling as a means to ensure trajectory consistency. However, two questions arise: i) Does tiling involve concatenating the generated frame with the initial image of the video or the ground-truth image at a specific timestep? ii) How exactly does tiling provide trajectory consistency? I recommend the authors extend this paragraph with more details. 4. Figure 2 appears somewhat irregular, as the widths of the noises applied to video diffusion are equal, whereas the noises applied to Temporal Super Resolution vary in width. Furthermore, the outputs of the video diffusion module also exhibit different widths. Ensuring consistency in the widths of these elements would enhance the clarity of the figure. 5. The disentanglement between the planner module and the policy network is presented as a strength of the model. However, it remains unclear how the policy network can impact the planner or interact with the environment in such cases. Section 3.2 attempts to explain this relationship, but the details provided are not easy to follow. It is hypothesized that the video planner solely provides synthesized frames for reference (denoted as $\bar{o}$). However, it is unclear whether, when the environment provides an observation $o$, the inputs for inverse dynamics should be $(o_t, \bar{o}_{t+1})$ or $(\bar{o}_t, \bar{o}_{t+1})$. Further clarification is needed in this regard.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The specific difference between the super-resolution and low-temporal-resolution architectures mentioned in lines 179-181 appears to be that the former employs the observation tiled as a condition, while the latter does not. It is unclear whether the super-resolution and low-temporal-resolution architectures refer to the hierarchical structure mentioned in lines 184-194. To enhance clarity, I suggest the authors restructure lines 174-183 accordingly. Alternatively, if these architectures are not the hierarchical structure, the authors should provide a clear explanation or visual representation, such as a model figure, to elucidate the architectures for improved readability. 2. In Appendix A.3, it is unclear whether "replicate future controls across different pixel locations" serves the purpose of concatenating state-action pairs. Further clarification is needed to understand the specific role and functionality of this mechanism. The other questions are above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As the authors showed, the video diffusion process is very slow, even with some fast sampling methods. It's not suitable for real-time robotic control for now, while might be an interesting domain to explore in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed comments – please see our response below. Feel free to let us know if you have additional questions or comments. > Detailed Description of Modules. We will clarify the detailed description of the modules. Our video planner seeks to construct a sequence of states that represent a plan going from our original start state to the desired final goal state. Since the planner only has to generate a set of feasible state transitions to go from the start to goal state, it does not need to worry about the precise actions to generate to reach each feasible state, which can be environment-specific (i.e. some environments require you to execute a sequence of actions to open a door while others let you go directly through the door). > Novelty of Text-Conditioned Generation. Most existing works in text-to-video generation have focused primarily on their application to AI-CG, and most large-scale text-to-video models (Phenaki, Make-A-Video, Imagen-Video) typically only generate frames conditioned on natural language. The primary novelty of our approach is formulating the problem of acting as a text-to-video generation problem, where given a language description of the goal, we synthesize a set of video frames denoting how we will act. This further requires some domain-specific designs, such as how to condition video generation on observed images, where we propose trajectory observation tiling. > Tiling to Ensure Trajectory Consistency. The tiling referred to in this section corresponds to tiling the first observed image to each generated frame in the video. This operation ensures that the background details in the first observed image are more readily captured across each frame of the synthesized video: when denoising each frame in the video, the fully convolutional architecture sees (starting with the first convolution) the direct concatenation of an intermediate noisy video with the first observed frame of the video.
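The tiling operation described in this response can be sketched as follows — a minimal NumPy sketch with toy shapes; in the actual model the concatenation happens inside the denoising U-Net, and all names and dimensions here are illustrative:

```python
import numpy as np

def tile_first_observation(noisy_video, first_frame):
    """Concatenate the first observed frame onto every noisy frame, channel-wise.

    noisy_video: (T, H, W, C) intermediate noisy video being denoised
    first_frame: (H, W, C) observed conditioning image
    returns:     (T, H, W, 2C) input seen by the first convolution
    """
    t = noisy_video.shape[0]
    # Repeat the conditioning frame across the time axis, then stack channels.
    tiled = np.broadcast_to(first_frame, (t,) + first_frame.shape)
    return np.concatenate([noisy_video, tiled], axis=-1)

# Toy demo: a 20-frame, 8x8 RGB "video" conditioned on its own first frame.
video = np.random.rand(20, 8, 8, 3)
conditioned = tile_first_observation(video, video[0])
assert conditioned.shape == (20, 8, 8, 6)
```

Because every timestep carries an unperturbed copy of the observed frame, the denoiser can keep static background details consistent across the whole plan.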
We will clarify this in the paper. > Figure 2. We have attached a modified version of Figure 2 with equal width in image outputs to the main rebuttal response PDF. Feel free to let us know any other modifications we can make to the figure to improve clarity. > Clarification on Inverse Dynamics. To infer an action using the inverse dynamics model, we take as input $o_t, \overline{o}_{t+1}$, the current observed frame and the next synthesized frame. We will clarify this in the paper. > L179-L181. The trajectory consistency through observation tiling section refers to the base video diffusion model in Figure 2, where we want to synthesize a low resolution video given the observed image and text-instruction. The temporal super-resolution model we refer to is not related to the one discussed between L184-L194, but rather the temporal super-resolution architecture from Imagen Video. We will clarify this reference in the main paper. > Baseline Details. In our state based diffusion model, we want to diffuse a sequence of states to accomplish the instructed language task. To apply our video-diffusion model architecture to this setting, we take each D dimensional state and tile it to form an HxWxD image. We then train a video diffusion model on this “state” video and use it to regress the future states to execute. We will include these details in the paper. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I appreciate the author's rebuttal, which addresses some of my concerns/confusion. Yet, I still think the novelty is not strong enough to have an "accept". In sum, I will raise the score to "6".
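The clarified inverse-dynamics interface — the model consumes the current *observed* frame $o_t$ paired with the next *synthesized* frame $\overline{o}_{t+1}$ — can be sketched as below. The stand-in model and all names are hypothetical; only the input pairing reflects the rebuttal:

```python
import numpy as np

def infer_action(inverse_dynamics, observed_frame, synthesized_next_frame):
    """Query an inverse-dynamics model with the current observed frame o_t
    and the next synthesized frame o_bar_{t+1}, as clarified in the rebuttal."""
    # Stack the frame pair along the channel axis and let the model map it
    # to an action.
    pair = np.concatenate([observed_frame, synthesized_next_frame], axis=-1)
    return inverse_dynamics(pair)

# Toy stand-in model: mean pixel change between the two frames as a
# scalar "action" (a real model would be a small learned network).
toy_model = lambda pair: float(pair[..., 3:].mean() - pair[..., :3].mean())

o_t = np.zeros((8, 8, 3))        # current observation from the environment
o_bar_next = np.ones((8, 8, 3))  # next frame produced by the video planner
action = infer_action(toy_model, o_t, o_bar_next)
assert action == 1.0
```

Pairing a real observation with a synthesized target keeps the controller grounded in the actual environment state while still following the generated plan.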
Summary: This paper frames the sequential decision-making problem as a text-conditioned video generation problem. Given a text-encoded specification of a desired goal and the first frame with the initial configuration, a planner generates a set of future frames that depict planned actions. The generated video is then used to extract control actions. This framework facilitates leveraging pretrained language embeddings and large-scale internet videos to enable combinatorial generalization and knowledge transfer across diverse tasks. Strengths: 1) Casting sequential decision-making as a synthesis problem naturally allows us to leverage a growing wealth of existing research in large-scale image/video/language generative models that encapsulate valuable world models for robotics. 2) Strong experiments: both simulation and real-world tasks are included. Weaknesses: 1) Some overlap with prior work: the bulk of the framework seems to borrow heavily from Decision Diffuser[1]. Specifically, [1] also uses classifier-free guidance diffusion to generate a sequence of states, followed by some inverse dynamics modeling to identify the action to execute. The main innovations of the work are centered around substituting engineering components in prior works; e.g. replacing unconditional generation from something like Diffuser[2] with conditional generation [1] [Is Conditional Generative Modeling all you need for Decision Making?](https://arxiv.org/abs/2211.15657) [2] [Planning with Diffusion for Flexible Behavior Synthesis](https://arxiv.org/abs/2205.09991) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1) I wonder how much of the combinatorial generalization exhibited by the model is dependent on the quality of the text embedding.
For example, as shown in [1], using something like CLIP language embeddings can make it difficult to capture more abstract notions like "left" or "right" relative spatial positioning, affecting combinatorial generalization in some of the tasks shown in Section 4.1. Could the authors include a sensitivity analysis on the T5 embeddings? 2) The authors claim this approach allows for more universality when it comes to environment diversity or reward specification. But it seems to me that the inverse dynamics model discussed in Section 3.2 would have to be retrained for different tasks anyways. How do the parameter/data scales for this (smaller) model compare to the diffusion component? 3) How does performance for combinatorial generalization in Section 4.1 scale compared to something like BC? i.e. is there a ceiling to training inverse dynamics on generative data -- could the authors demonstrate how performance scales with the number of examples compared to something like [1] or [2]? [1] [Programmatically Grounded, Compositionally Generalizable Robotic Manipulation](https://arxiv.org/abs/2304.13826) [2] [CLIPort: What and Where Pathways for Robotic Manipulation](https://arxiv.org/abs/2109.12098) Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the authors have discussed some limitations in their concluding remarks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
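For context on the classifier-free guidance mechanism the review attributes to both this work and Decision Diffuser: at sampling time the conditional and unconditional noise predictions are combined. One common formulation is sketched below (exact parameterizations and the guidance weight `w` vary between papers):

```python
def classifier_free_guidance(eps_cond, eps_uncond, w):
    # Move from the unconditional noise prediction toward the
    # text-conditioned one, amplified by the guidance weight w.
    return eps_uncond + w * (eps_cond - eps_uncond)

# w = 0 ignores the condition entirely; w = 1 recovers the plain
# conditional prediction; w > 1 over-weights the condition, which is
# what gives strong text adherence in practice.
assert classifier_free_guidance(2.0, 1.0, 0.0) == 1.0
assert classifier_free_guidance(2.0, 1.0, 1.0) == 2.0
assert classifier_free_guidance(2.0, 1.0, 3.0) == 4.0
```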
Rebuttal 1: Rebuttal: Thank you for your detailed comments – please see our response below. Feel free to let us know if you have additional questions or comments. > Overlap with Prior Work. We believe the primary novelty of our work over past work such as Decision Diffuser is the construction of a large-scale model for decision making that can be directly learned from Internet data. While Decision Diffuser, similar to our work, uses diffusion models, in combination with classifier-free guidance, to predict states and actions, it assumes the presence of existing datasets of states and actions in a domain of interest. In contrast, our work is able to use the existing video information on the internet to learn to make decisions, by casting the decision making problem as that of generating a video of actions to execute conditioned on a text description of the actions you wish to take. > Effect of Language Embedding on Combinatorial Generalization. We ran an ablation experiment where we tested the combinatorial generalization ability of our approach with different sizes of language embeddings from T5. While we found that different sizes of language embeddings did not really affect the video quality, we indeed found that larger language embedding sizes substantially improved combinatorial generalization (as measured by CLIP score on Bridge).

| | CLIP | FID |
| ---- | ---- | --- |
| T5_small | 22.75 | 15.53 |
| T5_large | 22.78 | 16.07 |
| T5_XXL | 24.54 | 14.54 |
| T5_XXL (No Pretrain) | 24.43 | 17.75 |

> Inverse Dynamics Model. The inverse dynamics model is very small (<1M parameters) and trained with substantially fewer resources (1 TPU for 12 hours), while the video model is large (5B parameters) and trained with substantially more resources (256 TPUs for several days). While the inverse dynamics would need to be trained per environment, the model only needs to learn to predict the action that can transition between two frames, which is substantially easier to learn.
> Scaling of Combinatorial Generalization. In principle, we believe that both our text-conditioned video generation approach and language conditioned BC would scale well with increased amounts of data. However, while there are plentiful amounts of videos on the internet (and many commercial large text-to-video models like Gen2 that demonstrate very good combinatorial generalization) there is much less labeled action data, making it much easier to get a combinatorially generalized version of our approach. We don’t believe there is a limit to training inverse dynamics models on generated data and believe it's actually much easier to train an inverse dynamics model than a BC policy that generalizes well, as also theoretically shown in [1]. This is because the inverse dynamics model only has to infer an action given both present and future states while a policy must anticipate the next action to take to maximize reward across all future states [1]. If the reviewer desires, we are happy to add an additional experiment demonstrating scaling of the performance of the inverse dynamics model with the number of examples in the final version of the paper. [1] Brandfonbrener et al. Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I have two follow-up questions/comments: 1) My concern with the effect of language embedding on combinatorial generalization is that such generalization is dependent *entirely* on using very strong text embeddings. That is, below some certain representational capacity for said embeddings, combinatorial generalization disappears entirely. This is important to me because it helps weigh the contributions of the rest of the framework (e.g. the diffusion approach to decision making), against "just using better text representations."
Thus, I feel that reporting results on different sizes of T5 is not really addressing this concern, because the scale of the text data (and thus the upper bound to the semantics represented by such embeddings) is the same. What I would prefer to see is using both weaker and stronger text embeddings that are completely orthogonal to T5 (e.g. older CLIP embeddings used as in ProgramPort or CLIPort, or some of the newer open LLM embeddings). 2) I am familiar with the work of Brandfonbrener et al., but it's not clear to me their findings/analysis extend to the domain of learning on generated data. Intuitively, I agree that it makes sense that learning inverse dynamics is going to be more sample-efficient than BC here as well, but if models learned via both approaches collapse after some N samples, for an uninterestingly low N (e.g. comparable to existing real-world datasets upon which one can reasonably do BC), then that for me nullifies some core advantages of the proposed approach, even if it does marginally better than traditional BC. It would still be great if the authors could include in their revised/final draft this scaling experiment. I would also like to hear from my fellow reviewers; in the meantime, my rating stands. --- Reply to Comment 1.1.1: Title: Reply to Reviewer VoKS Comment: Thank you for your comments – please see our clarifications below: > Benefit of text embeddings We would like to clarify why we conducted the ablation on different sizes of T5 embeddings. We wanted to demonstrate that less powerful text embeddings (e.g., those from T5-small) can still result in successful plans despite T5-small having substantially lower language modeling and compositional performance in the original T5 paper [1]. UniPi is able to utilize T5-small’s embedding to generate successful plans on new prompts, which implies that the quality of the language embedding is not essential for successful plan extraction.
A variety of prior works have demonstrated the efficacy of using CLIP embeddings in diffusion models. For instance, the DALLE-2 model is based on the CLIP text encoder, but is able to demonstrate good combinatorial generalization across different language prompts (including simple relations between objects). An analysis of the effect of text-embeddings on text-to-image generation is studied in [2] (Figure A.5), and it is found that T5-Small (60M) and T5-Large (770M) (which are substantially smaller embeddings than T5-XXL (12B)) are substantially worse than CLIP in image generation metrics, suggesting that in our above analysis, CLIP embeddings would likely also successfully generate plans. It’s difficult for us to directly evaluate the effect of using CLIP embeddings in our text-to-video setup, as our existing codebase and pretrained models are based on T5 embeddings, which have a different shape than that of CLIP. > Inverse Dynamics Analysis We would like to clarify that the inverse dynamics and behavioral cloning models are directly trained on real data, as opposed to generated data, so the work of Brandfonbrener et al. would apply in our setting. We take the inverse dynamics model learned on real data and apply it to generated frames, which we find to be effective, as the generative model is directly fitting the distribution of real data (and would perfectly fit it in the theoretical limit). [1] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR 2020 [2] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. NeurIPS 2022
Summary: The authors utilize the enhanced capabilities of text-guided image synthesis, a recent advancement in deep learning, to engineer general-purpose embodied agents capable of sequential decision-making. The proposed method involves using language instructions as inputs to a text-conditioned video generation model, specifically video diffusion. Actions are then inferred from the generated video using a learned inverse dynamics model. As agent learning from video has been a well-established approach in reinforcement learning and imitation learning for complex tasks, translating language instructions into video for general-purpose agents can be viewed as a universally applicable strategy. This approach addresses the challenges of diversity and heterogeneity in agent environments, offering a broader, more versatile solution that can also leverage pretrained large-scale language and language-video models. Strengths: This paper demonstrates how to harness the potential of text-guided image synthesis for agent learning, providing evaluations through multiple experimental scenarios. The paper is insightful, and it can offer several meaningful perspectives for readers interested in using LLMs and multi-modal pretrained models for agent learning. Weaknesses: The proposed method in this paper can be seen as novel, marking its originality, and showcasing insights on the utilization of multi-modal models. However, I also think that the method description in Section 3.1 falls short of illustrating clear technical contributions. It seems to only incorporate the existing diffusion model and the inverse dynamics model in the proposed framework.
Despite the innovative nature of the broader concept, a more detailed explanation and comprehensive analysis regarding how to improve and generalize the proposed method can enhance the clarity and overall influence of the paper; e.g., comparing with language instruction-following agents with multi-modal capabilities, and comparing with skill-based hierarchical RL approaches that generate latent skill sequences (which can also be translated into actions later). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors explain the benefits of the proposed method, specifically in comparison to other instruction-following agents that can use pretrained multi-modal models? If each video frame generated can be seen as a sub-goal representation in the task, and each sub-goal is dealt with by the inverse dynamics model, hierarchical planning in Section 3.1 seems to be critical for the effective handling of long-horizon tasks. Could the authors describe the procedure of sampling videos in more detail? In line 215, the open-loop controller is chosen for computational efficiency. Could the authors explain this further? What if the environment has some randomness? Could the authors explain why Transformer-BC and TT achieve low performance in the experiments? In Figure 5, the adaptable planning scenario is not clear. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The conclusion has specified the limitations including the computational load of video diffusion.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback on our work! We address your questions as follows. > Highlighting contribution in method section We will update our method sections to highlight the contributions, which include (1) how to re-purpose text-to-video models designed for media and entertainment to be a useful tool for control through frame-conditioning, (2) how to overcome difficulties around generating consistent frames across time, and (3) how to conduct hierarchical planning effectively. We believe that these contributions indeed address challenging problems in both video diffusion models and planning and control. > Procedure of sampling videos to address temporal hierarchy and long-horizon planning Generating videos hierarchically at the right granularity across time is indeed important to ensure UniPi’s performance. To sample a video plan in the simulated robotics experiments, we first sample 10 frames with a larger frame-skip between each frame (8). Then conditioned on these 10 frames, we fill in the intermediate frames, resulting in 20 frames with frame-skip 4. We found frame-skip 4 results in the right granularity for training an effective inverse dynamics model. > Advantage over language instruction following agent pretrained on multimodal data Most instruction-following agents using multimodal data are typically initialized from pretrained VLM models. These models are typically trained with captioned image-text pairs, but typically do not contain much motion / physics information about the environment. In contrast, in our approach, we can train on a wealth of existing language annotated video data, which captures much more information about how to act in environments and the physics of the environments. This allows our approach to transfer world-knowledge about how to do particular tasks, such as precise visual motions to open a door handle, which are not available in pretrained VLM models.
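The two-level sampling schedule described in this response (10 keyframes at frame-skip 8, then infilled to 20 frames at frame-skip 4) can be sketched as below. Both sampler callables are hypothetical stand-ins for the coarse and infilling diffusion models:

```python
def hierarchical_plan(sample_keyframes, infill, horizon=80):
    """Two-level video planning: coarse keyframes first, then temporal infilling."""
    key_times = list(range(0, horizon, 8))    # 10 keyframes at frame-skip 8
    keyframes = sample_keyframes(key_times)
    fine_times = list(range(0, horizon, 4))   # 20 frames at frame-skip 4
    return infill(keyframes, fine_times)

# Toy stand-ins: real versions would be diffusion-model samplers, with the
# infilling model conditioned on the already-sampled keyframes.
sample_keyframes = lambda ts: {t: f"key@{t}" for t in ts}
infill = lambda keys, ts: [keys.get(t, f"infilled@{t}") for t in ts]

plan = hierarchical_plan(sample_keyframes, infill)
assert len(plan) == 20
```

Sampling coarsely first keeps the long-horizon structure of the plan cheap to generate, while the infilling pass supplies the frame-to-frame granularity the inverse dynamics model needs.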
> Open loop control A limitation of UniPi is the high computational cost of video diffusion to generate a video plan, making it costly to re-plan at every step, which is why we perform open-loop control. In stochastic environments, we could choose to regenerate video plans in the case in which observations do not match the ones in our plan, or we can batch-sample video plans ahead of time and switch between generated plans to address changes in the environment. There is also a wealth of work on improving the video sampling speed of diffusion models. > Poor performance of Transformer-BC and TT We believe the poor performance of Transformer-BC and TT is due to the long task horizon of demonstrations and the inability of these agents to accurately synthesize and follow the long horizon plans in this environment. As the different steps in the environment are executed, errors accumulate in the observation space and the agents fail to correctly finish the task. > Figure 5 adaptable planning clarification Figure 5 shows that in addition to using text to guide plan generation, UniPi can further utilize intermediate frames by fixing a particular future frame during sampling to guide generated plans towards moving either the left or the right block. We have updated the caption to make this clear. Please let us know if anything remains unclear. --- Rebuttal Comment 1.1: Comment: I'd like to extend my thanks for the comprehensive response, which addresses most of the concerns I raised. I have decided to maintain my original score of accept.
Summary: A general framework, the Unified Predictive Decision Process (UPDP), was proposed in this paper. It leverages images as a universal interface, texts as reward specifiers and an independent planning module for policy synthesis. A powerful diffusion model was adopted within the UPDP framework to generate authentic future video frames, and inverse dynamics was used to generate downstream policies. Strengths: 1. This paper is well written, clear and easy to understand. 2. A very intriguing introduction of the UPDP framework. UPDP enables better utilization and knowledge transfer of large-scale generative pretraining, which may demonstrate a broader impact in future applications. 3. Extensive experiments demonstrate the effectiveness of the proposed method, including high-quality video generation, combinatorial generalization and multi-environment transfer. Weaknesses: 1. Diffusion model: The adaptation of the diffusion model to UPDP lacks novelty. Temporal super-resolution and tiling the context frame are commonly used tricks in video diffusion models. Although this can be regarded as "simple yet effective", this paper can be further improved if the authors can include more domain-specific design into the instantiation of UPDP with diffusion models. 2. Metrics: For table 4, first there is a typo in line 302 saying "higher FID and FVD" while actually it's lower FID and FVD. Besides, FID and FVD are also not informative in this case because they only measure the distance between the distribution of generation and real data, which cannot tell if the model follows the text prompt correctly. Beyond training a classifier, a better metric could be measuring IoU of bounding boxes between groundtruth and generations, which are produced by a pretrained object detector, or human evaluation. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In Figure 7, are the images generated by the diffusion model or video frames taken during action execution?
The reflection on the microwave oven in the "Turn Faucet Left" row looks too consistent to be something generated by diffusion, while this reflection is not shown in the input frame. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors talked about the potential improvement of generation speed for the diffusion model part. In general, I think this is a solid and intriguing work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review. Please find our response below. > Novelty of diffusion models for UPDP We agree that frame conditioning and temporal super-resolution are not new in video diffusion models, but adapting them to control and hierarchical planning has not been done before. To our knowledge, UniPi is the first to extend video diffusion to perform planning in the image space, and the implications are significant given the continual development of text-to-video foundation models and the ability to leverage internet-scale video data to improve decision making. Furthermore, we note that the introduction of UPDP as a more practical alternative to the more restrictive MDP is also novel. We will update the manuscript to include these discussions. > IoU metric for evaluation Thank you for the suggestion. We will try this evaluation metric for the final version of the manuscript, although we suspect that using IoU of bounding boxes might have more false negatives due to the stochasticity of the generated plans (i.e., there are multiple ways to complete a task and the final frame might look different between the ground truth and generated plans, despite the generated plans also completing the task). > Figure 7 clarification The videos in Figure 7 are generated by the diffusion model. We have attached a few more generated videos conditioned on the same first frame and noted the generated videos are diverse. > Typo on line 302 We have updated the manuscript to fix the typo. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: All of my concerns are well addressed. Thus, I am pleased to raise my score to accept.
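The bounding-box IoU metric discussed in this exchange could be computed as in the sketch below — standard axis-aligned intersection-over-union; the `(x0, y0, x1, y1)` box format is an assumption, and in practice the boxes would come from a pretrained object detector run on ground-truth and generated frames:

```python
def bbox_iou(a, b):
    """IoU of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 region: IoU = 1 / (4 + 4 - 1).
assert abs(bbox_iou((0, 0, 2, 2), (1, 1, 3, 3)) - 1 / 7) < 1e-12
```

As the rebuttal notes, a low IoU here does not necessarily mean a failed plan, since different valid plans can place objects differently in the final frame.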
Rebuttal 1: Rebuttal: We thank reviewers for their positive reviews and feedback. Reviewers noted that the paper was well written (Reviewer voju, 5Y9t), insightful (Reviewer eKLN), and had strong experiments (Reviewer VoKS). Reviewers voju, VoKS, 5Y9t had some concerns about novelty which we address below. ## Novelty The main novelty of our work lies in our formulation that enables us to leverage internet video as an effective data modality for decision making. Most recent works on leveraging internet data have used multimodal models, primarily VLMs trained on captioned image-text pairs, to construct agents. While such agents will have rich prior knowledge about the underlying semantics of individual images, they lack knowledge about the motion of objects or their physics, and for example would not know how visually one should go about opening something like a door handle. In contrast, learning from video allows us to learn the motions of people and the world, enabling us to have prior knowledge about the mechanisms of opening a door, or what types of object dynamics or movement are possible. To leverage internet video for decision making, we propose the UPDP decision making formulation, which casts decision making in different environments as a problem very closely related to video generation, enabling us to transfer this wealth of knowledge. While our resultant instantiation of UPDP shows some similarity to Decision Diffuser and existing video models, these are particular instantiations of our approach, and our formulation could also be applied to newly discovered or other existing generative models of video. Pdf: /pdf/68328797b173bda59296a839cebedc082d2923f3.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Active Reasoning in an Open-World Environment
Accept (poster)
Summary: This paper introduces Conan, a new benchmark for evaluating active reasoning in an embodied/open-world environment where an agent acts as a detective and must answer questions about the actions and intentions of another agent (a vandal) from the traces it leaves behind. The experiments benchmark an RL + vision-and-language baseline and a more structured model that uses Bayesian inference on the Conan task. Strengths: - ambitious benchmark trying to tackle an exciting and interesting problem that highlights a limitation in the current focus of the community: lack of benchmarks testing reasoning/question answering with active information gathering - problem and setting are well-motivated - interesting setup for the active reasoning problem with a vandal/detective Weaknesses: - many important details that are not clear. One of the main things that needs more clarity is what the format of the benchmark is - it would be very helpful to have more examples, earlier in the paper, of what the traces look like / what context is available to models, or maybe an example of a series of reasoning steps a model (or human) would go through to answer a question in Conan. See questions below for specific points of confusion - I wasn't totally convinced/clear that the questions in Conan are focused on testing abductive reasoning. Although eventually we would want to solve this with end-to-end methods like the RL baselines presented here, I feel like some heuristic exploration baselines would be important to understand what it takes to solve the benchmark and contextualize the scores of the RL agents (what scores do these agents get?): - the ideal explorer (specifically wondering: is the ideal explorer the ideal policy?) - an agent that explores the whole 64x64 grid (how much better does the ideal explorer do compared to this?)
- using an oracle extractor that extracts the trace-relevant frames instead of the every-k key-frame extractor - instead of using RecurrentPPO, some heuristic extractor that allows agents to condition on key frames in the past (e.g. the frames with traces that are closely associated with the question) - Related to previous point, the benchmark also conflates *memory* and *exploration* with the abductive reasoning challenges -- both of these are already huge challenges for RL/current models. I worry that these may make the benchmark too difficult, and additionally not actually test for the reasoning abilities that it claims to test. Overall, I just don't have a good understanding of what solving the benchmark actually entails - if we have agents with good memory + exploration, is the reasoning involved actually quite simple? I think heuristic baselines/agents with privileged information would help the case here, as well as simplifying aspects of the task that are orthogonal to abductive reasoning Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - table 1: what's the difference between embodied and open-world? - l146: what does a trace look like? are objects left behind, does the state change, etc.? l151: when you say "32 distinctive traces" do you actually mean "32 object interactions" and traces can consist of multiple object interactions? - l153: "60 abductive reasoning tasks" - does this mean 60 question templates? - l164-168: what's an example of a path + alternative and why exactly does this require the detective to gather additional information? - why is the detective allowed to take the same actions as the vandal instead of just exploring the scene and collecting information? (feels like potentially unnecessary complexity) - l226-227: "masks are first employed to encode questions..." - what does this sentence mean? - How does the explorer condition on the question (e.g. some sentence embedding of the question or something else?)
- Is there a step limit on the explorer's episodes? - l174-177: this made it sound like the detective and the agent were separate. My understanding is that the agent *is* the detective? - l255: what is the representation of symbolic map? - l307: to clarify, does the ideal explorer get the frames of the ground truth vandal trajectory, and then we extract every 30 frames? how much of the gap in performance is due to the keyframe extractor dropping important frames vs. the inaccuracy of the reasoning model? - sec 4.4: does this assume access to the vandal policy pi? Is that realistic? - l345: why would vanilla-trans be more susceptible to insufficient evidence? Isn't a more likely explanation that it's not pretrained to do question answering, in contrast to the other models? - AfD doesn't improve on standard (small improvements at best / mixed results across the board) - doesn't this suggest that this kind of reasoning is not super important in the benchmark? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer: > "have more examples about traces and reasoning steps" See pdf attached. > "Are the questions in Conan focused on testing abductive reasoning? Some heuristic exploration baselines" Good exploration, i.e., the heuristic ideal explorer recovering the vandal's trajectory, is enough for "OK" performance, but to squeeze out the remaining juice, one needs a much better reasoner that can figure out the hidden states of the vandal from observed traces (see response to nDEB). E.g., cutting trees could mean the vandal needs either wood or food (apples), and from traces of cut trees alone, it is not easy to tell which. For "trace-relevant" frames or "key frames", the ideal explorer can already be understood as providing all trace-relevant frames. The notion of key frames is ill-defined; as in the video understanding community, one major challenge is how to extract "key frames", which is a post-hoc concept that cannot easily be obtained beforehand. The common practice, which also improves efficiency (reducing context length in the Transformer), is to truncate by sampling every k-th frame. For the exhaustive search agent, we show the results with an agent with full 64x64 grid observation here: 25.6 (Vanilla-Trans), 64.6 (F-BiLM-BERT), 69.1 (F-BiLM-DeBERTa), 67.9 (Flamingo-mini). The noisy sequence, with suboptimal temporal order (unlike the ideal explorer), makes the results slightly worse. > "conflates memory and exploration with abductive reasoning" Memory and exploration are other terms for describing the challenge. However, they are not all of it. Indeed, the agent needs to have a good memory, propose good hypotheses, and explore to check each hypothesis's validity. However, memory and exploration can only provide observations rather than states, as in the distinction between observation and state in a POMDP, which remains a challenge.
The agent needs reasoning-guided exploration, rather than exhaustive exploration, to quickly find frames that most likely support a hypothesis, and to reason about the underlying hidden states based on the evidence gathered, as explained in the cut-tree example above and the examples in the PDF. Besides, experimentally, we note that even with the ideal explorer, the reasoning is still not perfect. > "embodied vs open-world?" The difference is subtle. However, in our work, we'd like to emphasize the diverse and rich environment the agent can interact with, while simplifying actuator control, hence the wording. > "l146: what does a trace look like? l151: "32 distinctive traces"" See pdf attached and supp. "32 distinctive traces" means that there are a total of 32 distinctive kinds of traces that can be left in the environment (see Figure A1). > "60 abductive reasoning tasks" No. Abductive reasoning tasks refer to tasks that the Conan environment can support. > "l164-168: example of a path + alternative and why...?" Also see the attached pdf. > "why detective same actions" In the ideal case, the detective only needs to navigate. However, in some cases, the detective needs additional actions, such as encountering roads blocked by stones and needing to make tools to break them. Surely masking other actions would make the task easier. > "l226-227: masks are first employed to encode questions...How does the explorer condition on the question?" The mask is used to guide the model's attention to the relevant parts of the playground observation. In questions like "Why does the vandal cut the tree", we need a pointer to what "tree" actually refers to. So we use a mask the same size as the map to highlight which tree the question refers to, and extract both the linguistic features and the visual features from the masked map as the question encoding. > "step limit on the explorer?" Yes, 500. Discussed in lines 326-327. > "l174-177: My understanding is that the agent is the detective?"
Yes. > "l255: the representation of symbolic map?" It is a 2D array with dimensions [64, 64], where each number in the array represents the type of object at that position. > "l307: the ideal explorer get the frames of the ground truth vandal trajectory, and extract every 30 frames? Performance gap due to the keyframe extractor dropping vs. the inaccuracy of the reasoning model?" It is the final downsampled length that is 30 frames; that corresponds to extracting every 6th-9th frame. The context window size is 9x9, which means that no information is lost in this process. > "sec 4.4: access to the vandal policy pi?" We did assume the detective shares the same policy as the vandal. It is not entirely realistic; however, it is also reasonable to some extent to assume that we share the same world model: we all know how to make swords or break the stones. But we admit that it would be better to train the detective agent, say with traditional RL methods in the forward direction, to distill the world knowledge. The work explores this direction with a simple assumption and certainly has room for improvement. > "l345: vanilla-trans susceptible to insufficient evidence ... it's not pretrained to do question answering?" A very reasonable explanation. Its failure can be attributed to its weakness in reasoning based on incomplete observations. Would pretraining on other VQA tasks help solve Conan? We are willing to explore this possibility in future work. > "AfD doesn't improve on standard" While the final performance is only slightly better, unlike the other methods AfD is not directly supervised with the relationship between traces, questions, and answers. Instead, AfD is trained with a P(S|O) model (state prediction from observation) and a policy model as in the vandal. The final results are obtained from an explicit inference process.
Considering this fact, AfD reaching performance similar to, and even slightly better than, other directly supervised models should be considered a significant result. --- Rebuttal Comment 1.1: Comment: Dear Reviewer yGBc, We wanted to confirm whether the concerns and questions you raised in your initial review have been successfully addressed in our responses and revisions. We truly value your insights and feedback. Please feel free to post additional questions and comments about our work during the author-reviewer discussion period.
Summary: The paper introduces Conan, an interactive open-world environment for evaluating the active reasoning abilities of agents. In Conan, agents need to answer questions by actively seeking evidence and acquiring new knowledge in a setting of incomplete information. Conan is formulated as a detective game. First, during the initialization of the game, a vandal agent completes a task, leaving traces in the environment. Next, questions are generated for a detective agent to answer. The detective has to answer these questions by actively interacting with the environment to reconstruct the actions of the vandal. Strengths: * The environment differs from existing benchmarks for its active reasoning and interactive multi-round setting * The paper is very well written and easy to follow * Conan is an interesting environment that could spur further research on active reasoning in interactive contexts Weaknesses: * It is not clear how challenging the task is and how good the performance of the models reported in the paper is. A human study would help the reader understand this critical point. * A qualitative analysis of the games and the behavior of the different agents would also be needed in the main paper to understand when the agents make mistakes and to what extent the tasks are solvable * As mentioned in the paper, it looks like the explorer does not supply particularly relevant information to the reasoning model, substantiating the intuition that the bottleneck is there, rather than in the reasoning abilities of the model Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is a human study or an error analysis feasible? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors described the limitations of the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer: Thank you very much for your thoughtful and detailed review. We are pleased that you found this work interesting and unique among existing benchmarks. > "It is not clear how challenging the task is and how good the performance of the models reported in the paper is. A human study would help the reader understand this critical point." > "Is a human study or an error analysis feasible?" We have uploaded a PDF file detailing the problem-solving process, in case the challenges presented in the work were unclear. We kindly refer you to it for a detailed explanation. Indeed, conducting a human study would be valuable. However, playing this particular game requires strong prior knowledge and familiarity with the game mechanics. Training is not easy, and it is hard to distinguish what leads to humans' failure in this task. As a result, we are still conducting the human study and will update the results in the discussion phase when we have preliminary numbers. Below we present the error analysis for the "goal" split. We examine the accuracy of F-BiLM-DeBERTa on various tasks, comparing two groups: reasoning based on the ideal explorer and on the TRPO explorer. As mentioned in the paper, the ideal explorer is relatively oracle-like, as it not only gathers all the traces left by the vandal but also captures temporal information. Results show that in most cases the model answers more questions correctly with the ideal explorer than with the TRPO explorer. Reasoning on the ideal explorer fails when the answers cannot be directly "seen" in the environment and are not found afterwards; for example, the detective can see a table but cannot see what was made on it. On the other hand, reasoning on the TRPO explorer struggles with long-term traces, indicating that it cannot cover all traces during exploration. Results from the reward also validate this (see response to Reviewer nDEB). We are going to add more error analysis to the supp.
Thanks again for your suggestion!

| **Tasks** | get_drink | defeat_cow | get_apple | make_stone_pickaxe | place_bed | place_furnace | defeat_zombie | make_stone_sword |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Trpo** | 47.06 | 43.90 | 35.7 | 48.48 | 43.90 | 44.44 | 52.50 | 37.50 |
| **Ideal** | 100.00 | 85.37 | 78.57 | 39.39 | 87.80 | 83.95 | 75.00 | 4.17 |

| **Tasks** | get_lava | defeat_skeleton | make_iron_sword | get_coal | get_beef | get_diamond | get_stone | place_table | get_wood |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Trpo** | 50.00 | 46.59 | 56.25 | 50.00 | 53.85 | 40.62 | 36.84 | 39.36 | 36.00 |
| **Ideal** | 66.67 | 82.95 | 28.12 | 45.45 | 42.31 | 84.38 | 47.37 | 91.49 | 96.00 |

| **Tasks** | make_bucket | get_iron | get_water | make_iron_pickaxe | make_bed | make_steak | make_wood_sword | make_wood_pickaxe |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Trpo** | 35.29 | 28.57 | 45.95 | 56.52 | 47.83 | 46.15 | 50.00 | 40.00 |
| **Ideal** | 73.53 | 46.43 | 54.05 | 52.17 | 39.13 | 50.00 | 64.29 | 55.00 |

> "A qualitative analysis of the games and the behavior of the different agents would also be needed in the main paper to understand when the agents make mistakes and to what extent the tasks are solvable." > "As mentioned in the paper, it looks like the explorer does not supply particularly relevant information to the reasoning model, substantiating the intuition that the bottleneck is there, rather than in the reasoning abilities of the model." Thanks for your suggestion. We do consider adding more analysis and discussion about different agents and their results. We have included some qualitative analysis in Sec 5.2. Besides, some extra experiments have been conducted; see response to Reviewer nDEB for more information. In short, with empty visual inputs, the reasoning performance is quite low, indicating that the agent is severely limited by the lack of visual information.
The TRPO explorer shows a noticeable improvement over empty visual inputs, indicating that some exploration is better than none. However, the performance is still relatively low. The ideal explorer achieves significantly better performance, indicating that its ability to gather perfect trace evidence greatly benefits the downstream reasoning task. This highlights the importance of effective exploration. However, this is not to say the reasoning part can be considered solved: as can be seen in Table 3, even with an ideal explorer, the reasoner still cannot answer the questions well. Also see the response to Reviewer nDEB for a detailed discussion. --- Rebuttal Comment 1.1: Title: Thanks Comment: I appreciate the response of the authors to address my comment. I increased my score by 1 point. --- Reply to Comment 1.1.1: Title: Response to Reviewer 2rwT Comment: We greatly appreciate your follow-up. We would also like to share some preliminary results from our human study. We recruited 10 participants from our subject pool and trained them within the proposed Conan environment. For the "goal" split, they averaged 110.0 exploration steps per question and answered 90.9% of the questions correctly. These findings indicate that humans exhibit more efficient exploration and better reasoning capabilities. We will include further details about this aspect in the revised version.
Summary: This paper proposes a benchmark for "active reasoning" titled Conan, where, instead of passively answering questions from provided context, an agent must interactively explore its environment to discover information. The authors differentiate such a task from so-called "passive reasoning" tasks such as video-language understanding where the visual input is directly fed into the model. Here, a model itself is responsible for exploring and attaining input that will help a model answer a question. Conan is implemented in a Minecraft-like gridworld where a "detective" agent must identify activities that a rule-based, goal-oriented "vandal" agent completed, by using traces of the vandals' behavior in the environment. E.g. if a vandal's goal was to make a wooden pickaxe, it may have cut some trees down, and the task for the model will be to answer why the vandal decided to make a wooden pickaxe. Two alternative methods are adopted to approach this task: one which uses an explorer agent, trained with an exploration reward, to generate evidence that is fed into a standard VLM, and another, "Abduction from Deduction", where one directly learns to predict the goal of the vandal from an inferred state trajectory. Overall this is an interesting paper, and I think the dataset and benchmark will be valuable for the community. The models tested for this environment are relatively simple, not really making full use of the "joint exploration and reasoning" abilities that Conan purportedly tests, however. Moreover I do have some outstanding questions, and some issues with the experiments (specifically a missing baseline) that prevent me from assigning a higher score. However I'm open to changing my score after the author response and discussion period. 
Strengths: - To my knowledge, the "active reasoning" component of Conan is an important area in vision/lang/RL, and is certainly underexplored in current embodied QA settings (although it is probably implicitly present in instruction following benchmarks and work like SayCan). - The dataset seems high quality and should be a useful contribution for the field. - Fairly sound experimental evaluation, comparing a variety of RL explorer agents with a variety of vision-language models. Weaknesses: - **Missing negative control baselines.** The experimental results should compare evidence gathered by the trained explorer agents to a weaker negative baseline, e.g. a random untrained explorer policy or no explorer input at all. This is needed to convincingly show that the explorer agents are actually learning to explore in a way that is more beneficial than chance, at least for the downstream QA task. - In general I'm definitely more interested in to what extent performance is bottlenecked by good exploration on this task, rather than fixing the explorer evidence and trying subtly-differently-trained large VLMs. - As mentioned in the conclusion (L376-377), one of the key promises of Conan is that strongest performance on this benchmark should intuitively be achieved by models that jointly learn to reason and explore, but right now a decoupled two-stage process is adopted—first training an exploration model with an oracle "find trace" reward, extracting relevant keyframes, then training a VLM on top of the frozen policy. As a result, models trained for this task (for now) look like typical embodied QA models doing passive reasoning (just with the extra step of training an explorer to get the vision input).
Of course, it's not strictly necessary for a benchmark paper to include a strong novel model for the benchmark, but without it we aren't able to evaluate whether Conan performance numbers have **headroom** for more sophisticated approaches to joint exploration and reasoning. (It seems like this should be the case, but we can't really be sure). - Some more outstanding questions (see next section) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The idea of answering questions via "traces" left behind in the environment is a little hard to grasp and could use more explaining. There are some example traces in Figure 1b but these are small figures and aren't explained that much. The authors could spend some more time walking through example "traces" and more fully convince readers that the traces indeed provide answers to some of the more subtle questions in the benchmark (e.g. how do traces indicate that the vandal's goal is to "get the diamond"; couldn't the vandal have alternative goals like get iron/coal?). - It might be valuable to have an "oracle" setting to demonstrate that with perfect trace evidence and enough training models can 100% this task—or else explain why such an oracle doesn't exist (e.g. what causes the lagging performance even for the "Ideal Explorer" setting in Table 3? Is it an amount of training data issue? Are there still limitations as to the quality of the trajectory attained even by the ideal explorer, and is there an even more suitable oracle for this?) - Related to this, can authors further explain L308 ideal explorer: "visible to the ground-truth vandal's trajectory"? What does "visible to" mean—shouldn't the ideal explorer literally *be* the ground-truth vandal's trajectory? - More details could be provided on the specific kinds of generalization splits tested here. At test time, is there a subset of abductive reasoning tasks held out that the model has never seen?
Or does the model see the same questions it has seen during training, just in unseen environments? Do the test environments differ systematically from those during train (or is it possible to induce such a ) - Evaluating (and supporting) such compositional/systematic generalization splits would greatly increase the appeal of the benchmark. - (Related to the above) Do authors have an intuition as to whether the test-time performance drop is more due to the VLM's inability to generalize, or the explorer's inability to generalize and generate good evidence for unseen environments (if test does indeed have unseen environments)? Is it possible to disentangle the two using different evaluation splits in the environment? - In Figure 4, does a score of near 100 as attained by e.g. TRPO indicate that the explorer indeed finds all traces (+100 reward) all of the time? If not, is there an explicit measure of that? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your thoughtful and detailed review. > "Missing negative control baselines." Thank you for your insightful feedback. We have tried our model with empty visual inputs as a negative control baseline, as you suggested, and we show the results on the "goal" split here:

| | Vanilla-Trans | F-BiLM-BERT | F-BiLM-DeBERTa | Flamingo-mini |
| --- | --- | --- | --- | --- |
| Empty visual inputs | 26.4 | 25.5 | 25.9 | 22.9 |
| Trpo explorer | 25.0 | 44.4 | 43.1 | 43.3 |
| Ideal explorer | 78.4 | 59.5 | 71.8 | 47.8 |

The results show that using empty visual inputs yields random performance across all settings; it also shows that the training QA pairs are unbiased. The TRPO explorer achieves higher performance, which suggests that the exploration strategy learned by TRPO helps gather informative evidence for the reasoning process. The ideal explorer is an oracle-like exploration policy that has access to perfect trace evidence and temporal information; it provides the most comprehensive information about the environment. This highlights the importance of effective exploration in improving reasoning performance. However, it does not mean that reasoning is less important, as even with the ideal explorer, the model still could not achieve satisfactory performance. Based on all results, collecting informative evidence appears to be the more important part of the overall objective. > "a decoupled two-stage process is adopted" > "more sophisticated approaches to joint exploration and reasoning" Due to the word limit, we kindly refer you to our response to Reviewer u9zC for a detailed discussion. > "answering questions via "traces" left is a little hard to grasp" > "some example traces in Figure 1b aren't explained that much" Thanks for your kind suggestion. We kindly refer you to the attached file for a simple demonstration of traces and reasoning steps. We are also considering adding more comprehensive examples in the supp.
Thanks again for your suggestion! > "traces indicate that the vandal's goal is to "get the diamond" or other alternative goals" For instance, if traces indicate that the vandal made a pickaxe, its intent could be to get iron or diamond according to the task dependency graph. As a result, the detective should continue to find more distinctive traces that help exclude the wrong possibility, say by checking the mine area. > "an "oracle" setting to demonstrate that with perfect trace evidence and enough training models can 100% this task" > "limitations of the ideal explorer traces" > "L308 visible to the ground-truth vandal's trajectory" The ideal explorer provides the vandal's trajectory, but it cannot directly observe the vandal's actual states when an action is taken, only the traces the action left behind. This can introduce confounding factors and potential confusion, necessitating the reasoning component to connect the traces and figure out the states. That is why we designed Conan: it is important not only to explore the traces, but also to reason about the hidden states. Agents must engage in abductive reasoning based on the traces they observe to reconstruct the entire story, which is indeed a challenging task. It could be a data problem if one takes a data-driven perspective; the ideal explorer is good enough, so in this case we tend to believe the reasoner is to blame. > "Is there a subset of abductive reasoning tasks held out that the model has never seen? Or does the model see the same questions it has seen during training, just in unseen environments? Do the test environments differ systematically from those during train?" > "Evaluating (and supporting) such compositional/systematic generalization splits" The test environments are disjoint from those seen during training, though they are sampled from the same distribution. Test questions are from the same distribution as training and could be similar.
As the results show, even on IID questions, models still fare worse than expected. Therefore, as the first attempt at active reasoning, we primarily focus on the current setting, and would like to see joint community efforts when it is time to introduce other forms of generalization into the problem. > "Performance drop due to the VLM's inability to generalize, or the explorer's inability to generalize and generate good evidence for unseen environments? Possible to disentangle them?" We tend to believe the answer is complicated, but the intuition is that incomplete information plays the more critical role. Evidence above and in the paper suggests that, compared to the reasoning component, the exploration component may be more important for achieving "OK" performance. But to reach perfection, the juice in the long tail can only be squeezed out by a good reasoner. Our current setting (test environments are unseen during training) shows that with the ideal explorer, performance could be greatly improved, meaning that the VLM could generalize but the explorer couldn't. > "Does a score of near 100 indicate that the explorer indeed finds all traces? Explicit measure?" Not exactly. As mentioned in Section 4.1, the agent receives a step reward when it discovers more traces, which is intentionally designed to avoid sparse rewards. Additionally, if the agent successfully finds all traces left by the vandal, it receives another 100 reward. Since environments differ and the vandal can leave long traces (note that there are steps with a reward of 2), the total step reward may vary slightly. We checked statistics to explain this issue: the ideal explorer achieves an average reward of 123.12 over 10000 environments. Treating this as the upper bound, the TRPO explorer is still imperfect; careful analysis shows that it only manages to find all traces in about 1% of the environments.
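The reward scheme described above (a small dense reward per newly discovered trace, plus a +100 bonus once all of the vandal's traces are found) could be sketched roughly as follows. This is a minimal illustration, not the Conan implementation: the function name and parameters are hypothetical, and a uniform `step_bonus` is assumed for simplicity (the rebuttal notes some steps actually yield a reward of 2).

```python
def trace_reward(found_traces, new_traces, all_traces,
                 step_bonus=1.0, completion_bonus=100.0):
    """Hypothetical sketch of a dense trace-discovery reward.

    found_traces: trace ids the explorer has already seen
    new_traces:   trace ids visible at the current step
    all_traces:   every trace id the vandal left in the environment
    """
    # Reward only traces not seen before, to avoid double-counting.
    newly_found = set(new_traces) - set(found_traces)
    reward = step_bonus * len(newly_found)
    found = set(found_traces) | newly_found
    # One-time bonus when the full trace set has been recovered.
    if found == set(all_traces):
        reward += completion_bonus
    return reward, found

r, found = trace_reward(found_traces={1, 2}, new_traces={2, 3}, all_traces={1, 2, 3})
# one new trace (id 3) plus the completion bonus -> 101.0
```

Under this kind of scheme the per-episode total naturally varies with how many traces the vandal left, which is consistent with the varying totals reported above.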
[1] Kurt Spencer. Noise!, 2014. URL https://github.com/KdotJPG/OpenSimplex2 --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks to authors for their detailed rebuttal. I appreciate the follow-up experiments and negative control baselines which verify that the models are indeed improving by using the image input. However I do think the random explorer baseline is also important, as it's possible to get some evidence by just randomly exploring (apologies if I did not sufficiently emphasize this in my initial review). Regardless, I think the other responses to the rebuttal also increase my confidence in the paper—I'll increase my score to a 7. --- Reply to Comment 1.1.1: Title: Response to Reviewer nDEB Comment: Thank you very much for your follow-up! We are pleased to provide some preliminary results while training our models with the random explorer. Also on the "goal" split, we have observed the following accuracy rates: Vanilla-Trans 25.9%, F-BiLM-BERT 35.8%, F-BiLM-DeBERTa 41.6%, and Flamingo-mini 38.0%. These results show that the performance with the random explorer is notably improved compared to scenarios with empty visual inputs. Interestingly, we also observe that while the random explorer greatly falls short of the ideal explorer, its performance is only slightly lower than that of the TRPO explorer. Notably, our agents are initialized at the starting point of the traces, which indicates that they can collect some traces around them through random exploration. This observation suggests that agents do indeed benefit from the presence of traces, although they may be collected randomly. Besides, while the trained TRPO explorer exhibits certain effectiveness, it still has a long way to go.
Summary: To address the gap in handling incomplete-information questions, this paper introduces an interactive open-world environment called "Conan." The purpose of Conan is to motivate and evaluate agents' active reasoning ability by requiring them to explore, gather evidence, and combine knowledge to solve complex scenarios. The ultimate goal is to improve AI agents' ability to interact with and learn from the world. Strengths: This paper introduces a new interactive open-world environment called "Conan." The purpose of Conan is to motivate and evaluate agents' active reasoning ability by requiring them to explore, gather evidence, and combine knowledge to solve complex scenarios. The ultimate goal is to improve AI agents' ability to interact with and learn from the world. Weaknesses: The author claims that the detective spawned in the environment needs to answer all questions through exploration, connecting key frames, and reaching conclusions. Yet, some questions presented in the Figures/Tables can be answered without exploring the environment. The process of answering questions through exploration requires more clarity and elaboration. Regarding the reward function used in the task, the author mentions that it incentivizes the agent to search for clues and traces related to the given question. However, the paper lacks detailed information about how the reward is designed and the specific motivation behind it. The writing appears to be cumbersome and illogical, making it difficult to grasp the actual meaning of the task. The purpose of introducing this new task is not clearly stated, leaving readers with questions about its significance and potential applications. In conclusion, while the paper introduces an intriguing concept of an interactive open-world environment for AI agents, there are aspects that require further clarification and refinement. 
Addressing the limitations and providing more detailed explanations about the task and reward design would enhance the overall understanding and impact of this research. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See Weaknesses Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer: Thank you very much for your review and positive rating! > "Some questions presented in the Figures/Tables can be answered without exploring the environment. " No. All of the questions have multiple possible answers and need to be inferred from traces in the environment. From the detective's view, he cannot directly "see" what happened in the environment. For example, when asked "Why did the vandal cut a tree? ", the detective needs to find traces before and after (especially after) to see whether the vandal needed wood to make tools or just wanted to collect apples to eat. If the reviewer believes some questions could be answered without exploration, please point them out in the discussion phase and we will be happy to further address the concern. > "However, the paper lacks detailed information about how the reward is designed and the specific motivation behind it." We apologize for the ambiguity. As mentioned in Sec 4.1, the agent receives a positive reward of 1 when a trace first appears within its local view, or 2 when the trace is closely associated with the question. The agent receives a reward of 100 if it successfully finds all traces left by the vandal. Additionally, the agent incurs a penalty of -0.1 for every timestep it takes. By rewarding the agent for successfully identifying and following the traces, we foster behavior that mimics real-world detective work, where finding clues is a fundamental aspect of solving mysteries. > "The writing appears to be cumbersome and illogical, making it difficult to grasp the actual meaning of the task. The purpose of introducing this new task is not clearly stated, leaving readers with questions about its significance and potential applications." Thank you for your feedback on the writing. We apologize for any confusion. We will make the necessary improvements to address these concerns and provide a more concise and logical presentation.
In a nutshell, the task is designed to capture humans' active and exploratory nature in real-world abductive reasoning, and to provide agents with an open-world environment that encourages active exploration and multi-round abductive inference based on in-situ gathered evidence and existing knowledge. The task goes beyond existing single-round passive tasks and contributes to the broader pursuit of building more intelligent and human-like AI systems. > "providing more detailed explanations about the task and reward design" Thanks for your kind suggestion. We kindly refer you to the attached file for a simple demonstration of traces and reasoning steps. We are also considering adding more comprehensive examples in the supplementary material. Thanks again for your suggestion!
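As a hedged illustration, the reward scheme restated in this reply (Sec 4.1 of the paper) could be sketched as follows; every function and trace name here is our own hypothetical stand-in, not Conan's actual code.

```python
def step_reward(newly_seen, question_related, all_traces, found):
    """One timestep of the described scheme: +1 for a newly discovered trace,
    +2 if it is closely associated with the question, +100 once every trace
    left by the vandal has been found, and a -0.1 time penalty per step."""
    reward = -0.1  # time penalty for every timestep taken
    for trace in newly_seen:
        if trace not in found:  # a trace is rewarded only on first discovery
            reward += 2.0 if trace in question_related else 1.0
    was_complete = found >= all_traces
    found = found | set(newly_seen)
    if found >= all_traces and not was_complete:
        reward += 100.0  # one-time completion bonus
    return reward, found

# A question-related trace first, then the final ordinary trace.
r1, found = step_reward({"axe_mark"}, {"axe_mark"}, {"axe_mark", "footprint"}, set())
r2, found = step_reward({"footprint"}, {"axe_mark"}, {"axe_mark", "footprint"}, found)
print(round(r1, 1), round(r2, 1))  # 1.9 100.9
```

This also shows why total rewards vary slightly across environments: the number of traces (and of question-related ones worth 2) differs from one vandal trajectory to another.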
Rebuttal 1: Rebuttal: To all reviewers: We are sincerely appreciative and grateful for the time each of you has spent reading our work and giving useful, thoughtful, and constructive feedback. The feedback is substantial and quite helpful for improving our paper. In particular, we would like to thank the reviewers for acknowledging our work to be well-motivated (Reviewers u9zC, i6nS, nDEB, yGBc), high quality and a useful contribution (Reviewer nDEB), an interesting environment that could spur further research on active reasoning in interactive contexts (Reviewer 2rwT), and an ambitious benchmark tackling an exciting and interesting problem (Reviewer yGBc). We are very glad to see that there is a general interest in such a topic and do hope that this work serves as a valuable contribution to the community. Pdf: /pdf/1518f2bb227ccc6f6014a326207d34dbfb3d3de4.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces Conan, an interactive environment that serves as a benchmark to evaluate agents' active abductive reasoning abilities when answering questions under incomplete (or partial) information. Because of the partial information, the model must explore the scene further to answer the questions, which is posed as a detective game. The paper also proposes the Abduction from Deduction (AfD) approach, which relies on Bayesian statistics. The experimental evaluation showcases the strengths and weaknesses of various machine learning models in different settings. Strengths: - The paper is well-motivated and clearly written in most parts. - The authors promise the availability of corresponding code and a detailed description of hyperparameters, which would be essential for reproducibility if open-sourced. - The paper introduces the AfD approach using Bayesian rules, which is interesting to the research community. - Relevant baselines have been selected to comprehensively evaluate various RL and multi-modal models. Weaknesses: - The proposed environment looks like a toy benchmark with limited real-life applications. - Dataset statistics are missing from the work, which might expose data biases related to goal, intent, or survival questions. - The question templates are also limited, and it is not clear if the model learns to take advantage of any data biases to answer the questions. It is also not clear if the questions are asked in a particular order. - This is framed as a multi-choice QA task with limited generalizability. - The proposed approach lacks a close integration between the exploration and reasoning processes, where the training and evaluation of the different models are independent. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Can you elaborate on how this research can be improved to integrate exploration and reasoning processes in future work? - What was the main motivation behind keeping the detective invincible, i.e.,
not maintaining the survival status on line 178? - On line 363, the authors mention that the proposed approach is akin to “human-like reasoning”. It is not clear how solving this toy task with an independent explorer and reasoning model brings us closer to “human intelligence”. Suggestions: - It would help to mark the best results in bold in Table 3. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses for more information about the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer: Thank you very much for your thoughtful review! We would like to discuss your concerns here: > "look like a toy benchmark with limited real-life applications" It is true that Conan is synthetic and may appear simplistic at first glance. However, the primary goal of Conan is not to reproduce real-world complexity but rather to introduce the active reasoning setting to the community. Direct applications may not be straightforward; however, we believe the autonomy of a successful Conan solver would be utterly desirable in our never-ending pursuit of AI. For example, there are many real-world scenarios where we need informed search to draw conclusions, say new material discovery (like LK-99), causal discovery, or new drug synthesis. Though Conan is more about common sense, the general problem should be very similar. Besides, "toy" as it may seem, Conan, as Reviewer yGBc puts it, is already quite significant and non-trivial for AI systems to master. From our perspective, the environment already covers all the critical aspects of active reasoning. Its toy-ish nature makes it easier for researchers to control and explore various aspects of the problem when attempting to address the difficulties, without worrying about real-world complexities, as the research history of CLEVR [1], CLEVRER [2], and EQA [3] shows (despite their being simpler than Conan). Successfully solving these tasks should lay the grounds for developing more general and adaptable AI models in the future as we envision autonomous machines and human-machine collaboration. We believe that treating Conan as a "toy" might overlook the true challenges and benefits it offers. > "Dataset statistics is missing" Thanks for the reminder.
Please see the dataset statistics below:

|Category|Train|Test|Val|A|B|C|D|
|-|-|-|-|-|-|-|-|
|Intent|71162|9152|8822|24.99%|25.20%|24.89%|24.93%|
|Goal|8000|1000|1000|24.89%|25.08%|24.87%|25.16%|
|Survival|7365|1560|1596|25.13%|24.95%|24.95%|24.97%|

|Task|Percentage|
|-|-|
|get_drink|2.47|
|defeat_cow|8.49|
|get_apple|2.52|
|make_stone_pickaxe|2.87|
|place_bed|8.44|
|place_furnace|8.23|
|defeat_zombie|7.8|
|make_stone_sword|2.65|
|get_lava|2.72|
|defeat_skeleton|8.7|
|make_iron_sword|2.64|
|get_coal|2.42|
|get_beef|2.7|
|get_diamond|2.39|
|get_stone|2.67|
|place_table|8.24|
|get_wood|2.67|
|make_bucket|3.11|
|get_iron|2.44|
|get_water|2.2|
|make_iron_pickaxe|2.95|
|make_bed|2.71|
|make_steak|2.81|
|make_wood_sword|2.53|
|make_wood_pickaxe|2.63|

We will release more detailed dataset statistics together with the benchmark. > "The question templates are also limited" > "if take advantage of any data biases / questions are asked in a particular order" Almost all VQA tasks suffer from this issue, as question templates are always limited. Our approach involved carefully crafting and curating a diverse set of question templates that cover various aspects of the task. Questions are not asked in any particular order, and the choices are randomly sampled from a pool and shuffled. Experiments have shown that without any visual input, the models achieve only random-level performance (see response to Reviewer nDEB), indicating that the templates do not introduce biases. > "framed as a multi-choice QA task with limited generalizability." Conan could be posed as an open QA task; however, we frame it as a multi-choice QA task for easier and clearer evaluation. > "the main motivation behind maintaining the detective set invincible" The decision was driven by the design goal of focusing on active exploration and reasoning rather than adding the challenge of survival.
In this way, we ensure that the main emphasis remains on solving complex reasoning tasks and answering questions related to the visual scenes provided. Besides, the task would be too hard if the detective could die (though this can be enabled by changing the hyperparameter). > "integrate exploration and reasoning processes" > "how an independent explorer and reasoning model brings us closer to “human intelligence”" We acknowledge this limitation and appreciate your interest in improvement. Ideally, we should jointly optimize both the exploration and reasoning components. As you said, that would be more human-like and intelligent: for instance, the reasoner could provide feedback to the explorer, guiding it toward more informative exploratory actions that are likely to yield relevant traces. However, there are significant obstacles in practice (we tried but failed). Attributing credit to exploratory decisions that have long-term consequences is complex: when exploration actions yield results much later, it becomes difficult for the model to understand the causal relationship between the exploratory decisions and their eventual impact on reasoning and answering questions. This highlights the interdependence between the exploration and reasoning processes: improving one aspect requires advancements in the other, creating a mutual dependency that complicates the optimization process. The reasoning component itself requires significant training and computational resources, especially when based on large language models; the need for substantial computational power makes the joint optimization of exploration and reasoning even more challenging. Decoupling the two is also a common practice in the community [3-5], so we adopt this route. We believe future work on Conan should prioritize reasoning over exploration: not to say that exploration is less important, but that it should be guided by a reasoning engine.
Similar to some forms of (soft) tree search, we hope that a reasoner would propose possible subgoals and the explorer would explore the environment conditioned on each subgoal, with the entire process integrated into an MCTS-based learning framework like AlphaGo. [1] CLEVR [2] CLEVRER [3] Embodied Question Answering [4] Embodied Question Answering in Photorealistic Environments with Point Cloud Perception [5] IQA --- Rebuttal Comment 1.1: Comment: I have read the authors' response and fellow reviewers' feedback. I believe that the authors have addressed most of my concerns. In particular, I also liked the point raised by Reviewer nDEB and the authors' experiments related to the random baselines. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for your follow-up. We are glad to have more discussions if you have any further concerns.
Hierarchical Open-vocabulary Universal Image Segmentation
Accept (poster)
Summary: In this paper, the authors target the open-vocabulary setting and propose a universal framework for open-vocabulary semantic/instance/panoptic segmentation. The whole framework is DETR-like. To deal with the discrepancies between the thing and stuff classes, the authors utilize independent decoders for thing and stuff classes. The authors benchmark their method on various datasets and achieve remarkable performance. Strengths: 1) The idea is simple and easy to understand. 2) The results seem promising. Weaknesses: 1) The idea seems incremental. Compared to UNINEXT, little new knowledge is introduced. The authors try to claim that their paper focuses on the open-vocabulary setting, while UNINEXT does not. However, they do not make it clear what prevents the extension of UNINEXT to the open-vocabulary setting, or what in their design is particular to such a setting. In fact, as noted on UNINEXT's GitHub project homepage, that approach extends easily to the open-vocabulary setting and achieves better performance than the proposed approach on the SeginW benchmark. 2) The comparison with other open-vocabulary semantic/panoptic segmentation methods is unfair. Since the authors train on many datasets with substantial overlap with ADE20K-150 or Pascal Context 59, which the previous methods did not use, I suspect that the performance gain achieved by the proposed method comes from the training dataset. The authors should evaluate their method on a harder dataset (e.g., ADE20K-full) and analyze the overlap between their training dataset and the test data. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NaN Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your invaluable insights and thoughtful comments. In the following sections, we address the questions you have raised: **[W1] New knowledge introduced to UNINEXT and performance comparison with UNINEXT on open-vocabulary and part-segmentation benchmarks** \ We thank the reviewer for taking note of the open-vocabulary capabilities of UNINEXT, which were released after our paper submission deadline. However, it's crucial to highlight that despite UNINEXT's strong performance on instance segmentation tasks such as SeginW, its architecture lacks the capacity to effectively execute panoptic and semantic segmentation involving stuff/background classes. As a result, it is incapable of tackling certain common open-vocabulary benchmarks like ADE-150 panoptic segmentation and ADE-Full semantic segmentation. In Table 4 of our main paper, we highlight the substantial performance gains achieved in closed-set scenarios. Moreover, in the table provided below (please refer to Table R1 for comprehensive results), we present the evaluation outcomes of UNINEXT when applied naively to the ADE-150, ADE-Full, CTX-459, and SeginW benchmarks. Our approach surpasses UNINEXT in AP, which assesses instance capability; PQ, which evaluates panoptic capability; and mIoU, which evaluates semantic segmentation performance. |Method| Train Data | A-150 (PQ) | A-150 (APmask) | A-150 (APbox) | A-150 (mIoU) | A-847 (mIoU) | CTX-459 (mIoU) | SeginW (APmask) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | UNINEXT (H) | O365,COCO, RefCOCO | 8.9 | 14.9 | 11.9 | 6.4 | 1.8 | 5.8 | **42.1** | | HIPIE (H) | O365,COCO, RefCOCO | **22.9** | **19.0** | **22.9** | **29.0** | **9.7** | **14.4** | 41.6 | Additionally, we have conducted performance comparisons between UNINEXT and HIPIE, both trained within the hierarchical segmentation settings.
Our evaluation is conducted on the validation sets of two datasets: COCO for Panoptic Segmentation and PAS-P for part-segmentation. | Method | Train Data | COCO (PQ) | COCO (APmask) | COCO (APbox) | COCO (mIoU) | PAS-P (mioUPartS) | | --- | --- | --- | --- | --- | --- | --- | | UNINEXT (H) | O365,COCO, RefCOCO, PAS-P | 37.3 | 60.1 | 49.9 | 21.3 | 52.0 | | HIPIE (H) | O365,COCO, RefCOCO, PAS-P | **58.0** | **61.3** | **51.9** | **66.8** | **63.8** | In summary, we achieve competitive, if not superior, performance to UNINEXT in instance segmentation, while significantly enhancing its performance in part-segmentation, open-vocabulary panoptic and semantic segmentation—a testament to the strides we've made in advancing the model's capabilities. We sincerely appreciate your valuable suggestions and feedback. We will incorporate all of these results into our revision. **[W2] Is there any overlap between the training dataset used and ADE-150 or Pascal Context? Could you evaluate your approach on more challenging datasets, like ADE20K-full, and provide an analysis of data overlap between your training dataset and the test data?** \ We genuinely appreciate your insightful question, which often goes overlooked in prior research. To the best of our knowledge, we are not aware of any substantial data overlap between the datasets we employ for training and ADE-150, as well as Pascal Context. It's noteworthy that these datasets are commonly utilized in a multitude of earlier works, including OpenSeed. In direct response to your question, we have included performance metrics on more rigorous datasets like ADE-full and Pascal-Context-459 in the Table below (please also see Table R1). Our approach exhibits superior performance compared to previous works that were trained under similar settings. 
|Method|Venue|Dataset|A-150 (PQ)|A-150 (APmask)|A-150 (APbox)|A-150 (mIoU)|A-847 (mIoU)|CTX-459 (mIoU)|SeginW (APmask)| | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | |OpenSeed|ICCV2023|O365,COCO|19.7|15.0|17.7|23.4|-|-|36.1| |X-Decoder|CVPR2023|COCO,CC3M,SBU-C,VG,COCO-Caption,(Florence)|21.8|13.1|-|**29.6**|9.2|**16.1**|32.2| |UNINEXT|CVPR2023|O365,COCO,RefCOCO|8.9|14.9|11.9|6.4|1.8|5.8|**42.1**| |HIPIE (ours)|-|O365,COCO,RefCOCO|**22.9**|**19.0**|**22.9**|29.0|**9.7**|14.4|41.6| Furthermore, our model's universality empowers it to effectively harness detection datasets such as Object365, an advantage unique to our architecture. This sets us apart from models like X-Decoder, which are constrained by their decoder design and cannot use bounding-box-only datasets. Similarly, other methods like [1,2,4,5] concentrate exclusively on semantic segmentation and lack instance-awareness. We argue that our universality is a strength rather than a weakness, because training on multiple tasks allows them to benefit each other. *We hope our explanation and experiments address your inquiries. Please don't hesitate to reply if you have any further concerns. We will integrate all your valuable suggestions into our revision, and open-source the code! Thank you!* --- Rebuttal Comment 1.1: Title: We genuinely appreciate the time and thought you invested in reviewing our paper Comment: Dear Reviewer 7gze, We genuinely appreciate the time and thought you invested in reviewing our paper. Your feedback has been incredibly valuable in enhancing the quality of our work. We're pleased to inform you that we have carefully addressed all the questions and concerns you raised in your reviews. Here's a brief summary of our actions: - **Clarifications on hierarchical segmentation**: We've included a ***diagram (Fig. R1 in the rebuttal PDF) to illustrate the essential differences*** from naively training the model on different granularities.
Additionally, we've presented ***qualitative results in Fig. R2*** and ***quantitative results in Table R2***, effectively showcasing the benefits of our design choice. - **Comparison with works trained on Part-Segmentation datasets**: We've ***significantly boosted mIoUPartS by over 11.8 points*** in comparison to the UNINEXT baseline on the part-segmentation dataset (Please refer to Table R2 for detailed results). - **Results comparison with UNINEXT on SeginW**: [***Please note that the results of UNINEXT on SeginW were released in June 2023, which is after NeurIPS deadline.***] Despite UNINEXT's strength in instance segmentation tasks, such as SeginW, ***UNINEXT's architecture lacks the capacity to execute panoptic and semantic segmentation*** effectively. In Table R1, we present the evaluation outcomes of UNINEXT when applied naively to ADE-150, ADE-Full, CTX-459, and SeginW benchmarks. ***Our approach outperforms UNINEXT by a large margin in instance segmentation, panoptic segmentation, and semantic segmentation.*** - **Overlap between training dataset and ADE-150 or Pascal Context**: We want to highlight that ***these datasets are commonly used in earlier works***, including OpenSeed. Additionally, we ***are not aware of any significant data overlap*** between the datasets we use for training and ADE-150, as well as Pascal Context. Your insights have significantly contributed to refining our work, and we believe the paper has greatly benefited from your expertise. ***If you have any further questions or suggestions, please don't hesitate to reach out!!*** We're eager to provide any additional clarifications needed! We look forward to continuing discussions that will enrich the revision of our paper. Warm regards, Paper 384 Authors
Summary: This paper presents HIPIE, an open-vocabulary image segmentation model that produces segmentations from text prompts. The authors propose to decouple the segmentation of “thing” and “stuff” due to the differences in their semantic and geometric properties. By training on an additional part-level dataset, HIPIE can also perform part-level segmentation. The authors perform experiments on several open-vocabulary segmentation and referring image segmentation datasets and achieve better performance than previous state-of-the-art methods. Strengths: - The decoupling of thing and stuff decoding makes sense because of the different feature distributions between the thing classes and the stuff classes. The authors have also experimentally verified that decoupling helps in quantitative measures. - The proposed method has strong performance on several popular datasets including COCO panoptic segmentation, ADE20K, the referring COCO dataset, etc. - The authors show that the proposed method can also work on part-level segmentation after being trained on part segmentation datasets. They show that the proposed method works better than Grounding DINO on at least one example in part segmentation, which is an interesting application. Weaknesses: - The paper claims a hierarchical representation, which I find weak. There is only a small paragraph (Section 3.7) about hierarchical segmentation, and it is about part-level segmentation. From the text, it seems like the hierarchy is embedded in the text prompt (e.g., a head consists of ears, hair, etc.) – which any model that works on a text prompt is already capable of incorporating. The proposed model itself is not hierarchical. One thing that the proposed model does while prior work doesn’t is that it trains on a part segmentation dataset (Pascal-Panoptic-Parts). However, it seems that the authors have only tested part segmentation performance on the same dataset, hence not demonstrating the “open vocabulary” capability.
Overall, I find the claim of “hierarchical model” as a contribution confusing. - Most of the performance improvements seem to come from decoupling. It appears to me that decoupling leads to extra model parameters and run-time because there are two decoders to run for each input image. There are no comparisons with recent models in these two regards. - It is unclear why “feature fusion” should not be performed for the “stuff” branch. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Table 3 – what is the difference between the top part and the bottom part? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors present a section for limitations (Section A5) in the supplementary material. It points to several future work directions rather than limitations of the proposed model. One potential limitation is that the part-level segmentation might not generalize to vocabulary beyond the training set. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your invaluable insights and thoughtful comments. In the following sections, we address the questions you have raised: **[W1] How does the hierarchical segmentation process occur in the proposed model? Is the model inherently hierarchical in its architecture? How to demonstrate the “open vocabulary” capability for part-segmentation?** Nice questions! Sorry for the confusion! We have indeed integrated unique designs tailored to hierarchical segmentation. In our efforts to elucidate the pivotal differences from previous methods and from naively training the model on part-segmentation datasets, we have included illustrative diagrams in the rebuttal PDF. Specifically, we concatenate class names from various hierarchical levels and contrast a mask embedding with these labels within the training loss. To illustrate, consider the example of "person head", we establish positive targets for both "person" and "head" individually, while designating negative targets for all other class names. This approach starkly contrasts with the outcomes of naively applying alternative methods, where "person head" might unintentionally garner negative targets from classes like "person body" or "person eye." Instead of treating each class name as an ordinary multi-word class label, our design uniquely captures the hierarchical nature of the underlying semantics. At inference, we run the same image once for each level of hierarchy and combine the final outputs. ***In Figure R1***, we visually articulate the design disparities when compared to methods like UNINEXT and ODISE. ***In Figure R2***, we show that our design benefits open-vocabulary settings and allows zero-shot inference for object parts on novel concepts. Additionally, we have conducted performance comparisons between UNINEXT and HIPIE, both trained within the hierarchical segmentation settings. 
Our evaluation is conducted on the val sets of two datasets: COCO for Panoptic Segmentation and PAS-P for Part-Segmentation.

|Method|Train Data|COCO (PQ)|COCO (APmask)|COCO (APbox)|COCO (mIoU)|PAS-P (mIoUPartS)|
|-|-|-|-|-|-|-|
|UNINEXT (H)|O365,COCO,RefCOCO,PAS-P|37.3|60.1|49.9|21.3|52.0|
|HIPIE (H)|O365,COCO,RefCOCO,PAS-P|**58.0**|**61.3**|**51.9**|**66.8**|**63.8**|

In the context of open-vocabulary part-segmentation, we acknowledge that a quantitative assessment of the open-vocabulary capability is hindered by the limited availability of part-segmentation datasets, where most datasets have similar classes. However, we provide qualitative analysis in Figure R2, where we show that our design benefits open-vocabulary settings and allows zero-shot inference of parts on novel objects. We are committed to further advancing our research in open-vocabulary part-segmentation. In future endeavors, we plan to label a new dataset and release it for the evaluation of open-vocabulary part-segmentation. **[W2] It appears to me that decoupling leads to extra model parameters and run-time because there are two decoders to run for each input image. There are no comparisons with recent models in these two regards.** Another nice question! Compared with UNINEXT, we introduce 30M more parameters, a 4% increase in total parameters (805M vs 775M). In terms of inference speed, our model incurs a small cost on A100 (1.31s vs 1.42s per iteration). However, given the observed performance gain and new task capabilities, we believe such cost is justifiable. **[W3] It is unclear why “feature fusion” should not be performed for the “stuff” branch** Regarding the decision to refrain from "feature fusion" within the "stuff" branch, we would like to reference the analysis detailed in Section 1 (Lines 35-64) and Figure 2 of our paper.
Specifically, in Figure 2, we observed that: 1) Noticeable discrepancies exist in the between-class similarities of textual and visual features between stuff and thing classes. 2) Stuff classes exhibit significantly higher levels of similarity in text features than thing classes. This observation suggests that integrating textual features may yield more significant benefits in generating discriminative features for thing classes compared to stuff classes. Consequently, for thing classes, we adopt an early image-text fusion approach to fully leverage the benefits of discriminative textual features. Furthermore, we have conducted empirical validation on MSCOCO, as presented in the table (copied from Table 4 in our paper), which underscores the superiority of our design in comparison to alternative design choices.

| Method | PQ | APmask | mIoU |
| ------ | --------- | --------- | --------- |
| Baseline - Fig. 4a | 44.6 | 42.5 | 66.8 |
| Decoupled (Fusion: Stuff + Things) - Fig. 4b | 50.0 | **44.4** | 77.1 |
| Decoupled (Fusion: Things) - Fig. 4c | **51.3** | **44.4** | **77.3** |

*Hope our explanation and experiments are able to address your inquiries. Please don't hesitate to reply if you have any further concerns. We will integrate all your valuable suggestions into our revision, and open-source the code! Thank you!* --- Rebuttal Comment 1.1: Title: We truly appreciate your dedicated time and thoughtful review of our paper Comment: Dear Reviewer 4C24, We truly appreciate your dedicated time and thoughtful review of our paper. Your feedback has been immensely valuable in enhancing the quality of our work. We're pleased to share that we have thoroughly addressed all the questions and concerns you raised in your reviews. Here's a concise summary of our actions: - **Clarifications on hierarchical segmentation**: We've included **a diagram (Fig. R1 in the rebuttal PDF) to illustrate the essential differences** from naively training the model on different granularities.
Moreover, we've presented ***qualitative results in Fig. R2*** and ***quantitative results in Table R2***, effectively showcasing the benefits of our design choice. - **Comparison with works trained on Part-Segmentation datasets**: We've significantly ***boosted mIoUPartS by over 11.8 points*** in comparison to the UNINEXT baseline on the part-segmentation dataset (Please refer to Table R2 for detailed results). - **Impact of decoupling on model parameters and run-time**: We've addressed this concern by highlighting that ***the total parameters increased by only 4%*** compared to UNINEXT (805M vs 775M), leading to ***substantial performance gains*** and the ***introduction of new task capabilities***. Additionally, we've attended to other minor questions and aspects as needed. Your insights have greatly contributed to refining our work, and we believe the paper has significantly benefited from your expertise. ***If you have any further questions or suggestions, please don't hesitate to reach out!*** We're eager to provide any additional clarification needed! We look forward to continuing discussions that will enrich the revision of our paper. Warm regards, Paper 384 Authors
Summary: The paper proposes a unified method for open-vocabulary universal image segmentation and detection. A text-image fusion module takes both the image features and text features and then sends the fused results to the decoder. Several design choices are presented and compared. The model utilizes the datasets of Objects365, COCO, RefCOCO, RefCOCOg and RefCOCO+ for training and is then tested on different benchmarks. Strengths: 1. The overall method is clear and easy to understand. 2. The motivation to unify all the open-vocabulary segmentation and detection tasks is good. Weaknesses: 1. The model is first pretrained on Object365, which has over 600K well-labeled images, and then finetuned on COCO, RefCOCO, RefCOCOg and RefCOCO+. Previous open-vocabulary image segmentation methods like [1-3] train only on the COCO dataset. I don't think the comparison is fair, as the model has seen much more well-labeled data. 2. Since the hierarchical segmentation only requires a change of text prompts, I think all previous work can also perform this task. Maybe the authors need to tone down the hierarchical claim and add some comparison with previous work. 3. The paper is titled open-vocabulary; however, lots of results in the paper are actually not, such as the COCO results in Tables 2, 5, 6. It's ok to report these results, but the authors need to report the open-vocabulary results, which should be the main focus. 4. Some important comparisons [4, 5] are missing on open-vocabulary semantic segmentation.
[1] Scaling Open-Vocabulary Image Segmentation with Image-Level Labels.
[2] Open-Vocabulary Panoptic Segmentation with MaskCLIP.
[3] Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models.
[4] Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP.
[5] Side Adapter Network for Open-Vocabulary Semantic Segmentation.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Table 3 seems confusing; why do the authors split the table into two parts? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful inquiries, and we will provide detailed responses to each of them below: **[W1] Concern about HIPIE's pre-training on Object365, which is unfair.** With regard to the use of large datasets, we'd like to highlight that our research is centered on universal models capable of addressing multiple tasks within a unified framework. As a result, we've trained our model on various datasets that collectively support these target tasks. This approach is aligned with recent endeavors such as OpenSeeD (ICCV2023), GroundingDINO (arXiv2023), and X-Decoder (CVPR2023), which have actually used datasets of considerably larger scales than ours. In the table provided below, we present a comprehensive comparison against concurrent works that have been trained under comparable settings. Importantly, our work shows remarkable performance and significantly outperforms the UNINEXT baseline, a model that also holds universal capabilities.

|Method|Venue|Dataset|A-150 (PQ)|A-150 (APmask)|A-150 (APbox)|A-150 (mIoU)|A-847 (mIoU)|CTX-459 (mIoU)|SeginW (APmask)|
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
|OpenSeeD|ICCV2023|O365,COCO|19.7|15.0|17.7|23.4|-|-|36.1|
|X-Decoder|CVPR2023|COCO,CC3M,SBU-C,VG,COCO-Caption,(Florence)|21.8|13.1|-|**29.6**|9.2|**16.1**|32.2|
|UNINEXT|CVPR2023|O365,COCO,RefCOCO|8.9|14.9|11.9|6.4|1.8|5.8|42.1|
|HIPIE (ours)|-|O365,COCO,RefCOCO|**22.9**|**19.0**|**22.9**|29.0|**9.7**|14.4|**41.6**|

In contrast to prior open-vocabulary segmentation methods, our HIPIE harnesses a unique advantage through the universality inherent in our model, which empowers us to effectively exploit both segmentation and detection datasets such as Object365. Prior works such as [3] and X-Decoder cannot use bounding-box-only datasets because of their decoder design. Other works such as [1,2,4,5] focus on semantic segmentation and do not have instance awareness.
**[W2] Maybe the authors need to tone down the hierarchical thing and add some comparison with previous work.** Sorry for the confusion! We have indeed integrated unique designs tailored to hierarchical segmentation. In our efforts to elucidate the pivotal differences from previous methods and from naively training the model on part-segmentation datasets, we have included illustrative diagrams in the rebuttal PDF. Specifically, we concatenate class names from various hierarchical levels and contrast a mask embedding with these labels within the training loss. To illustrate, consider the example of "person head": we establish positive targets for both "person" and "head" individually, while designating negative targets for all other class names. This approach starkly contrasts with the outcomes of naively applying alternative methods, where "person head" might unintentionally garner negative targets from classes like "person body" or "person eye." Instead of treating each class name as an ordinary multi-word class label, our design uniquely captures the hierarchical nature of the underlying semantics. At inference, we run the same image once for each level of the hierarchy and combine the final outputs. **In Figure R1**, we visually articulate the design disparities when compared to methods like UNINEXT and ODISE. **In Figure R2**, we show that our design benefits open-vocabulary settings and allows zero-shot inference for object parts on novel concepts. Additionally, we have conducted performance comparisons between UNINEXT and HIPIE, both trained within the hierarchical segmentation settings. Our evaluation is conducted on the val sets of two datasets: COCO for Panoptic Segmentation and PAS-P for Part-Segmentation.
|Method|Train Data|COCO (PQ)|COCO (APmask)|COCO (APbox)|COCO (mIoU)|PAS-P (mIoUPartS)|
|-|-|-|-|-|-|-|
|UNINEXT (H)|O365,COCO,RefCOCO,PAS-P|37.3|60.1|49.9|21.3|52.0|
|HIPIE (H)|O365,COCO,RefCOCO,PAS-P|**58.0**|**61.3**|**51.9**|**66.8**|**63.8**|

**[W3] Open-vocabulary results should be the main focus** We thank the reviewer for pointing out the importance of open-vocabulary performance. The closed-set results shown in Tables 2, 5, and 6 primarily underscore the universality of our model, a core contribution we emphasize. This universal capability is particularly noteworthy, considering that many prior works like X-Decoder and ODISE lack the capacity to perform detection and referring expression comprehension due to their decoder designs. In addition to the open-vocabulary results reported in the main paper, we also provide more results on Object Detection in the Wild (ODinW) and Segmentation in the Wild (SeginW) in our appendix, where we obtain significantly better results. Furthermore, we provide more open-vocabulary results in the first table (rebuttal PDF) on ADE-full and Pascal-Context-459, which feature many novel classes. Our model achieves competitive performance on these datasets as well. **[W4] Comparisons with [4, 5] are missing on open-vocabulary semantic segmentation.** We will add them in the revision! We outperform [4] (+0.7 mIoU on ADE-full and +2.1 mIoU on CTX-459) but do not surpass [5]. However, it's important to highlight that these models are designed specifically for semantic segmentation, whereas our model's capabilities encompass detection, instance segmentation, referring segmentation, and part segmentation as well. Also, these models rely on the COCO-Stuff dataset, which has more stuff classes compared to the COCO-Panoptic dataset that we employ, while our current setting is more favorable for object detection and instance segmentation. We intend to explore the implications of the COCO-Stuff dataset in the revision.
**[Questions]** The first part of Table 3 presents a comparison with methods using RN50 as their backbone, while the second part lists models utilizing larger backbones. We will clarify this in the revision. *Hope our explanation and experiments can address your inquiries. We will integrate all your valuable comments into our revision!* --- Rebuttal Comment 1.1: Title: We sincerely thank you for your time and effort invested in reviewing our paper Comment: Dear Reviewer EcWi, We sincerely thank you for your time and effort invested in reviewing our paper. Your feedback has proven to be invaluable in elevating the quality of our work. We're delighted to inform you that we've diligently addressed each of the questions and concerns you raised in your reviews. Here's a brief summary of our actions: - **HIPIE's pre-training on Object365**: To ensure a fair comparison, we've conducted a comprehensive evaluation (Table R1) against concurrent works, including OpenSeeD (ICCV2023), GroundingDINO (arXiv2023), and X-Decoder (CVPR2023), which employ datasets of notably larger scales than ours. We're thrilled to report that our HIPIE model ***outperforms not only the UNINEXT baseline but also these concurrent works***, showcasing its significant capabilities. - **Clarifications on hierarchical segmentation**: To enhance clarity, we've included ***a diagram (Fig. R1 in the rebuttal PDF) illustrating the key distinctions*** from naively training the model on different granularities. Additionally, we've provided ***qualitative results in Fig. R2*** and ***quantitative results in Table R2***, demonstrating the advantages of our design choice. - **Comparing with UNINEXT trained on Part-Segmentation datasets**: We've significantly ***improved mIoUPartS by over 11.8 points*** compared to the UNINEXT baseline on the part-segmentation dataset.
(Please see Table R2 for details) - **More open-vocabulary results**: In addition to the extensive evaluation results (***over 40 datasets***) presented in our main paper and appendix, we've included ***additional results on A-150, A-847, CTX-459, and SeginW*** in the rebuttal PDF. Notably, we've achieved state-of-the-art results on most of these benchmarks, highlighting our model's competitiveness, even when compared to concurrent works. Furthermore, we're committed to addressing other minor questions related to table design. Your input has been instrumental in shaping our revisions, and we're confident that the paper has greatly benefited from your expertise. ***If you have any further questions or suggestions, please don't hesitate to share them!*** We will be happy to answer them! We eagerly anticipate further discussions that will enrich the revision of our paper. Warm regards, Paper 384 Authors
Summary: The paper proposes to disentangle the representation learning and decoding for things and stuff, and to unify multiple segmentation tasks with different granularities (whole, part, subpart) and text formulations (reference text or category only). Extensive experiments are carried out to validate its effectiveness. Strengths: 1. The paper proposes to unify different segmentation tasks and benchmark them, which is a promising direction for future research. 2. The decoupling of things and stuff in representation learning is intuitive. 3. The experimental results are extensive and promising. Weaknesses: 1. As the model only uses fully supervised data for supervision without large-scale image-text pretraining, is its open-vocabulary capability sufficient for it to be claimed as an open-vocabulary model? I look forward to seeing more open-vocabulary results on a vocabulary quite different from the existing training vocabulary, with and without the assistance of CLIP for inference. 2. It seems that the architectural designs of this work (decoupling thing and stuff) are not tailored to whole/part/subpart segmentation. If previous methods such as UNINEXT, X-Decoder, and SEEM are also trained with a part-segmentation dataset (e.g., Pascal-Panoptic-Part), can they already provide solid results for such unified segmentation? Some experimental results would help. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Generally, I appreciate the setting of this work, but I still have some concerns, as detailed in the weaknesses part. If the authors can address my concerns, I am willing to upgrade my rating. Besides, I hope the authors can give a detailed introduction in Section 3.7 on how the hierarchical segmentation is done: do you need to run the segmentation multiple times, or just once? If the text query has whole, part, and subpart categories, how will the model do the inference? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your invaluable insights and thoughtful comments. In the following sections, we address the questions you have raised: **[W1] Open-vocabulary results with and without the assistance of CLIP for inference** Nice question! We sincerely appreciate the reviewer highlighting the absence of extensive large-scale image-text pre-training. This gap indeed serves as one of the motivations behind our integration of CLIP assistance (it is also a common practice). To answer your question in full, we argue that the extensive labels in the detection dataset and the language expressions in RefCOCO/+/g can also help the model align image and text representations. This alignment, in turn, contributes to our model's capability to achieve open-vocabulary segmentation without using CLIP. We provide additional results in the table below. In comparison to other universal models, our model, when integrated with CLIP, achieves state-of-the-art performance on the ADE-full benchmark with 847 classes. Moreover, the performance exhibited by HIPIE, even in the absence of CLIP, surpasses the UNINEXT baseline by a significant margin. This outcome underscores our model's commendable open-vocabulary capability, even without reliance on CLIP assistance.

|Method|Venue|Dataset|A-150 (PQ)|A-150 (APmask)|A-150 (APbox)|A-150 (mIoU)|A-847 (mIoU)|CTX-459 (mIoU)|SeginW (APmask)|
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
|OpenSeeD|ICCV2023|O365,COCO|19.7|15.0|17.7|23.4|-|-|36.1|
|X-Decoder|CVPR2023|COCO,CC3M,SBU-C,VG,COCO-Caption,(Florence)|21.8|13.1|-|**29.6**|9.2|**16.1**|32.2|
|UNINEXT|CVPR2023|O365,COCO,RefCOCO|8.9|14.9|11.9|6.4|1.8|5.8|42.1|
|HIPIE w/o CLIP (ours)|-|O365,COCO,RefCOCO|18.1|16.7|20.2|19.8|4.8|12.2|41.0|
|HIPIE w/ CLIP (ours)|-|O365,COCO,RefCOCO, (CLIP)|**22.9**|**19.0**|**22.9**|29.0|**9.7**|14.4|**41.6**|

**[W2 and Questions] How to do hierarchical segmentation?
If previous methods such as UNINEXT, X-Decoder, and SEEM are also trained with a part-segmentation dataset (e.g., Pascal-Panoptic-Part), can they already provide solid results for such unified segmentation?** We are sorry for the confusion caused by the insufficient architectural description. We have indeed incorporated distinct designs specifically tailored for hierarchical segmentation. In our efforts to elucidate the pivotal differences from previous methods and from naively training the model on part-segmentation datasets, we have included illustrative diagrams in the rebuttal PDF. Specifically, we concatenate class names from various hierarchical levels and contrast a mask embedding with these labels within the training loss. To illustrate, consider the example of "person head": we establish positive targets for both "person" and "head" individually, while designating negative targets for all other class names. This approach starkly contrasts with the outcomes of naively applying alternative methods, where "person head" might unintentionally garner negative targets from classes like "person body" or "person eye." Instead of treating each class name as an ordinary multi-word class label, our design uniquely captures the hierarchical nature of the underlying semantics. At inference, we run the same image once for each level of the hierarchy and combine the final outputs. In Figure R1, we visually articulate the design disparities when compared to methods like UNINEXT and ODISE. In Figure R2, we show that our design benefits open-vocabulary settings and allows zero-shot inference for object parts on novel concepts. Additionally, we have conducted performance comparisons between UNINEXT and HIPIE, both trained within the hierarchical segmentation settings. Our evaluation is conducted on the val sets of two datasets: COCO for Panoptic Segmentation and PAS-P for Part-Segmentation.
|Method|Train Data|COCO (PQ)|COCO (APmask)|COCO (APbox)|COCO (mIoU)|PAS-P (mIoUPartS)|
|-|-|-|-|-|-|-|
|UNINEXT (H)|O365,COCO,RefCOCO,PAS-P|37.3|60.1|49.9|21.3|52.0|
|HIPIE (H)|O365,COCO,RefCOCO,PAS-P|**58.0**|**61.3**|**51.9**|**66.8**|**63.8**|

*Hope our explanation and experiments are able to address your inquiries. Please don't hesitate to reply if you have any further concerns. We will integrate all your valuable suggestions into our revision, and open-source the code! Thank you!* --- Rebuttal Comment 1.1: Title: We sincerely appreciate your time and effort in reviewing our paper! Comment: Dear Reviewer YNZH, We want to express our sincere gratitude for the time and effort you've dedicated to reviewing our paper. Your feedback has been invaluable in elevating the quality of our work. We're pleased to inform you that we've taken great care in addressing each of the questions and concerns you raised in your reviews. Here's a summary of our actions: - **Open-vocabulary results without CLIP**: We're excited to report that our HIPIE model, even without CLIP, ***outperforms the UNINEXT baseline and other concurrent works***, such as OpenSeeD (ICCV2023), ***by a significant margin***. The results presented in Table R1 underscore our model's remarkable open-vocabulary capability, demonstrating that it stands strong even without relying on CLIP assistance. - **Clarifications on hierarchical segmentation**: To provide better clarity, we've incorporated ***a diagram (Fig. R1 in the rebuttal PDF) that highlights the key differences*** from naively training the model on different granularities. Additionally, we've included ***qualitative results in Fig. R2*** and ***quantitative results in Table R2***, confirming the benefits of our design choice. Particularly, we've ***achieved an impressive increase of over 11.8 points in mIoUPartS*** compared to the UNINEXT baseline on the part-segmentation dataset.
- **More open-vocabulary results**: In addition to the extensive evaluation results (***on over 40 datasets***) we provided in our main paper and appendix, we've included ***more results on A-150, A-847, CTX-459, and SeginW*** in the rebuttal. Notably, we've achieved state-of-the-art results on most of these benchmarks, showcasing our model's competitiveness (even compared to concurrent works)! We genuinely appreciate your insightful input, which has greatly influenced our revisions, and we're confident that the paper has significantly improved as a result. ***If you have any further questions or suggestions, we welcome your input!*** Your continued engagement is immensely valuable, and we look forward to ongoing discussions that will further enhance the revision of our paper. Warm regards, Paper 384 Authors --- Rebuttal 2: Comment: Thanks for the explanation by the authors. After reading the results in the rebuttal stage, my main concerns have been addressed. **Some suggestions**: 1. I think the authors should spend more effort on the paper writing for better understanding, and the explanations in the rebuttal should be added to the paper in the next revision. It seems that there were similar confusions among reviewers about the pre-rebuttal version. 2. Besides, I encourage the authors to set up separate subsections in the experiments section to show the *Universal*, *Open-vocabulary*, and *Hierarchical* capabilities in the next revision. I believe this will help readers better understand your contributions. 3. About the open-vocabulary part, I am wondering what the performance would be if the model were trained with more image-text paired datasets such as LAION. Have the authors tried that? Given the results provided in the rebuttal stage, I will upgrade my score to weak accept. I hope the authors can include the above-mentioned results in the next revision.
--- Rebuttal Comment 2.1: Comment: Dear Reviewer, We are genuinely appreciative of your decision to upgrade the score to a weak accept! Your discerning feedback will undoubtedly shape the direction of our upcoming revision. We are dedicated to carefully incorporating your suggestions to enhance the quality of the paper writing and presentation. Regarding your question about the model's performance when trained with additional text-image paired datasets such as LAION, we highly value this suggestion! We have solid intentions to explore this avenue in our upcoming research endeavors, and we are actively involved in conducting these experiments (utilizing larger image-text pair datasets often requires substantial training time). It's worth mentioning that our preliminary experiments have indicated the potential for incorporating LAION to enhance open-vocabulary segmentation performance, especially within the context of semantic segmentation. Once again, ***we extend our heartfelt appreciation for your positive feedback and great suggestions. Your decision to upgrade our score is deeply appreciated.*** Wishing you a fantastic day and weekend ahead! Best regards, Authors of Paper 384
Rebuttal 1: Rebuttal: We extend our gratitude to the reviewers for their valuable feedback. Their insights have significantly enriched our work. We are heartened by YNZH's recognition of our paper, where YNZH highlights the significance of our approach in *"unifying different segmentation tasks and benchmarking them, which is a promising direction for future research"*. We appreciate EcWi's endorsement that *"the motivation to unify all the open-vocabulary segmentation and detection tasks is good"*. We are pleased to acknowledge 4C24 and 7gze's observations that *"the proposed method has strong performance on several popular datasets"* and *"the results seem promising"*. Furthermore, we deeply appreciate 4C24's comment that *"the decoupling of thing and stuff decoding makes sense because of the different feature distributions between the thing classes and the stuff classes"*. We will integrate all valuable suggestions into our revision, and open-source the code. In this section, we commence by tackling the concerns that have been collectively raised. These shared concerns correspond to the three keywords in the title: **[Hierarchical] What is the architecture design catering specifically to hierarchical segmentation, and how does it compare with naively running previous methods on a part-segmentation dataset?** Sorry for the confusion! We have indeed integrated unique designs tailored to hierarchical segmentation. In our efforts to elucidate the pivotal differences from previous methods and from naively training the model on part-segmentation datasets, we have included illustrative diagrams in the rebuttal PDF. Specifically, we concatenate class names from various hierarchical levels and contrast a mask embedding with these labels within the training loss. To illustrate, consider the example of "person head": we establish positive targets for both "person" and "head" individually, while designating negative targets for all other class names.
This approach starkly contrasts with the outcomes of naively applying alternative methods, where "person head" might unintentionally garner negative targets from classes like "person body" or "person eye." Instead of treating each class name as an ordinary multi-word class label, our design uniquely captures the hierarchical nature of the underlying semantics. At inference, we run the same image once for each level of the hierarchy and combine the final outputs. ***In Figure R1***, we visually articulate the design disparities when compared to methods like UNINEXT and ODISE. ***In Figure R2***, we show that our design benefits open-vocabulary settings and allows zero-shot inference for object parts on novel concepts. ***In Table R2***, we also empirically evaluate the results, which affirm our model's superior performance compared to the UNINEXT baseline trained on part datasets. **[Universal] What novel insights or knowledge does your approach introduce when contrasted with UNINEXT and other preceding universal models?** In this paper, we consider a broad scope of scene-understanding tasks consisting of Object Detection, Instance Segmentation, Part-Segmentation, Semantic Segmentation, Panoptic Segmentation, Referring Segmentation, and Referring Expression Comprehension. While there are many existing works approaching this objective, ours is the first model capable of performing all these tasks. In contrast to earlier methods, it's important to note that UNINEXT lacks the capacity for panoptic and semantic segmentation and exhibits suboptimal performance when directly applied to such tasks. X-Decoder's limitations arise from its decoder not being optimized for bounding-box-based learning, rendering it unable to perform object detection. Similarly, ODISE faces constraints as it cannot conduct referring segmentation and part segmentation tasks, while also demonstrating markedly inferior results in object detection and instance segmentation.
A summary of these limitations is provided in Table 1 of the main paper. **[Open Vocabulary] More open-vocabulary results are required to demonstrate the effectiveness of the method. Additionally, using datasets such as O365 seems unfair when compared with previous works.** In addition to the comprehensive evaluation results across **40** datasets presented within both the main paper and appendix materials, we have provided additional open-vocabulary results in Table R1 (see rebuttal PDF). In direct comparison to other universal models trained within similar settings, our model achieves state-of-the-art performance on the ADE-full benchmark with over 800 classes. Compared with methods featuring segmentation-specific decoders (e.g., ODISE and X-Decoder), our model exhibits a distinctive capability: the capacity to leverage detection datasets such as Object365 for enhanced object localization. We consider this to be a strength rather than a limitation, as annotating bounding boxes is significantly simpler compared to annotating segmentation masks. In contrast to prior methods primarily centered on open-vocabulary semantic segmentation, such as OVSeg or SAN, our model showcases a distinct capability by performing a considerably broader range of tasks with only a modest increase in the overall training data. Both of these models (OVSeg and SAN) used large-scale image-text resources such as LAION, CLIP, and Florence. (Additionally, it's important to acknowledge that such a comparison might exhibit a bias towards semantic segmentation methods, mainly due to their narrower focus and common utilization of the COCO-Stuff dataset, which contains a greater number of stuff classes compared to COCO-Panoptic.) *We believe that the explanations provided above sufficiently address the primary concerns shared by the reviewers. We kindly invite you to review the individual comments for each reviewer, where you will find our responses to reviewers' specific inquiries.
Your consideration is greatly appreciated. Thank you!* Pdf: /pdf/feabea72a1347656966c147c49bd4210998bc310.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Bringing regularized optimal transport to lightspeed: a splitting method adapted for GPUs
Accept (poster)
Summary: This paper adapts the Douglas-Rachford splitting to solve a wide range of sparsely-regularized optimal transport problems efficiently using GPU-parallelizable operations. The contributions are as follows: 1) Adapt the Douglas-Rachford splitting to handle regularized OT problems with sparsity-inducing penalties 2) Prove global convergence of the method and prove an accelerated local linear rate once the support is identified 3) Provide GPU kernels for well-known regularizers (quadratic, group lasso) that achieve low cost per iteration Strengths: The paper is well-written, the literature review is exhaustive and sets the context for this work: the relevant references are correctly cited. Besides, the subject tackled by the paper is critical for a wider adoption of OT to large-scale problems. The supplementary materials do a great job at making the experiments reproducible and explaining the low-level details of the kernel implementation. Weaknesses: The area of improvement for this paper remains the experiments section. For benchmarking purposes, I would use the Benchopt tool: https://github.com/benchopt/benchopt. Benchopt offers reproducible benchmarks and convergence curves over multiple runs. A figure from Benchopt to compare RDROT to L-BFGS would be more impactful than figure 2. Moreover, the authors should favour real-world datasets instead of simulated cost matrices. The quadratic regularization and the Group Lasso sections should both display examples on real-world datasets to prove the robustness of the proposed method. ========= Besides, I list below a few typos I spotted in the paper: 1) l.49: Douglas-Rachford... splitting is missing. 2) l.93: $\lvert \lvert X \rvert \rvert_F = \sqrt{\langle X, X \rangle}$. 3) l.108: I would make precise that $\lambda > 0$. 4) In equation 6, $f$ and $e$ are extracted from [23], without explicitly defining them. I would explicitly use $\mathbb{1}_n$ for $e$ and $\mathbb{1}_m$ for $f$. 5) l.165: "used" twice.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: l.143: "since $Y_k$ can be eliminated...": could you elaborate on what you mean? After multiple readings, this remains unclear. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: No negative societal impact for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time and effort you invested in thoroughly reviewing our paper. Your feedback is very valuable to us, as it helps us improve our paper. We are grateful that you highlighted some typos in the manuscript - we will make sure to revise it according to your comments in upcoming versions. Regarding your question on why $Y_k$ can be eliminated, this comes from the fact that $Y_k$ can be substituted into the $X$-update of equation (5). As a result, $X_{k+1}$ depends only on $X_k$, $\phi_k$, and $\varphi_k$. Further, since the $\phi$-updates depend only on $X$, we no longer have to keep track of $Y$ after doing this substitution. This is pivotal for the efficiency of the resulting algorithm since it enables us to keep track of only one large matrix instead of two (or even three). To improve the readability of the final manuscript, we will add some additional explanations here. Unfortunately, up to this day, there are no OT benchmarks included in BenchOpt that we can use. Of course, this also means that there is an opportunity for us to contribute to the BenchOpt effort. Specifically, it would have been nice to contribute a customized QuadOT benchmark and a GL-OT benchmark for this paper, but due to the time constraints of the rebuttal, we have not been able to prioritize this. We will look into adding BenchOpt experiments to our main repository, to make our results more convincing and reproducible, and reach out to the BenchOpt authors to investigate the possibility of having these benchmark problems merged with their efforts. We appreciate you sharing your input on improvements to the paper. We will update the manuscript according to the typos you highlighted in your review. Many thanks! The authors --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their answer. This is well noted for the BenchOpt benchmark. Keeping my grade unchanged as it is already positive.
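The $Y_k$ elimination above concerns the paper's specific updates; for readers unfamiliar with the method, the generic Douglas-Rachford iteration it builds on can be sketched in a few lines. This is an illustrative toy instance (nonnegativity indicator plus a simple quadratic, with hypothetical names), not the OT-specific algorithm:

```python
import numpy as np

# Generic Douglas-Rachford splitting for min_x f(x) + g(x):
#   y = prox_{rho f}(x);  z = prox_{rho g}(2y - x);  x <- x + z - y,
# with y_k converging to a minimizer. Toy proxes below stand in for
# the paper's OT-specific updates.

def douglas_rachford(prox_f, prox_g, x0, iters=100):
    x = x0.copy()
    for _ in range(iters):
        y = prox_f(x)
        z = prox_g(2 * y - x)
        x = x + z - y
    return prox_f(x)

b = np.array([1.0, -2.0])
prox_f = lambda v: np.maximum(v, 0.0)   # projection onto {x >= 0}
prox_g = lambda v: (v + b) / 2.0        # prox of 0.5*||x - b||^2 with rho = 1
sol = douglas_rachford(prox_f, prox_g, np.zeros(2))
# minimizer of 0.5*||x - b||^2 over x >= 0 is max(b, 0) = [1, 0]
```

Note that the iteration keeps only the single state vector `x` between steps, which mirrors the memory argument made in the rebuttal.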
Summary: The paper develops an approximation algorithm for solving the optimal transport (OT) problem with a general class of regularizers (named "sparsity promoting", but more precisely characterized by not penalizing sparse solutions). The Douglas-Rachford splitting algorithm is used, extending the previously introduced DROT algorithm to handle regularization. After introducing the OT problem and the Douglas-Rachford algorithm, the authors define a class of regularizers which include quadratic and group lasso regularization. With additional assumptions, the convergence rate of the algorithm is established via first a convergence to the correct support, and then linear convergence to the optimal solution. The authors state that the proposed algorithm converges to epsilon accuracy in 1/epsilon iterations, instead of 1/(epsilon^2) of previously known methods. A fast GPU implementation of the RDROT algorithm is briefly described, along with some other practical considerations. Two numerical experiments are performed on a) domain adaptation, and b) generative modeling. Strengths: The paper is well written overall, with a clear introduction to the problem, proposed approach and contributions. The practical considerations are also well written, and the GPU implementation -- as well as integration in popular programming frameworks -- makes the contribution particularly useful to the community. The experiments are clear and present a convincing case for RDROT. Weaknesses: The theoretical aspects could be made clearer: it is not evident how theorems 1 and 2 lead to a 1/epsilon convergence rate (as there is no rate for support identification). The relevance of assumption 1 to the OT problem should be expanded upon: if and where it has been used in the literature, and when it doesn't apply. In particular, a comparison to other convergence guarantees existing in the literature for Sinkhorn divergences (e.g. 
[1]) would be very useful in assessing the applicability of the proposed method. [1] Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration, Jason Altschuler, Jonathan Weed, Philippe Rigollet, 2017 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: If at all feasible, a comparison to more Sinkhorn-based algorithms (also in simpler settings, where convergence is assessed) could be useful to place the work within the literature. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: A discussion on the tradeoffs between the proposed algorithm and minimizing sinkhorn divergences would certainly help to correctly evaluate the proposed contribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your questions and your feedback on our paper! We believe that we have been able to address all your concerns, as detailed below. We agree that there is room for improvement to make the theory section clearer. The $1/\epsilon$ convergence rate of DR splitting is not something we derive; we simply refer to the work of He et al. (2012). This is a general result for DR splitting that is derived by leveraging the firm non-expansiveness of the proximal operators. The result requires only that the OT problem is convex and closed and that it has a solution. Hence, the $1/k$ convergence follows for all closed and convex regularizers that are proper over the convex polytope. Further, although the results in He et al. (2012) are for ergodic sequences, some of the results can be generalized to non-ergodic sequences, see, e.g., He (2015). We will clarify this in the final version of our document. In addition to such existing results, we establish that our algorithm identifies the correct sparsity structure in a finite number of iterations and that stronger (typically linear) rates dominate once the correct sparsity structure is identified. This means that we not only have a global $1/k$ rate that is competitive with many alternative algorithms, but that we also, in many cases, have considerably stronger local rates. These stronger results are established under the additional Assumption 1. As discussed in Section A.2 in the Supplementary material, Assumption 1 holds for most OT problems besides some very specific edge cases. We have never encountered such edge cases in practice, and believe that they are rare. You are right that this assumption is typically not imposed in other OT papers. However, in the context of computational OT, to the best of our knowledge, no other work leverages sparsity to derive theoretical guarantees. Papers in other areas that do, such as Liang et al. (2015), typically make this or related assumptions.
We will clarify these points in the final version of the document. Both the reference you mention in the review (Altschuler et al. (2017)) and Lin et al. (2019), which we refer to in the paper, derive $1/\epsilon^2$ rates for Sinkhorn and Greenkhorn. These rates are significantly worse than the ergodic rates that hold for RDROT. However, an interesting aspect of their analysis is that they quantify the total variation errors (i.e. $\ell_1$-norm) of the marginals in terms of the problem size. We have not (yet) attempted to derive such results. Nevertheless, our numerical results strongly suggest that the numerical advantages of RDROT do not diminish as the problem size is increased (at least up to problems of the sizes that fit the memory of our GPU card). We hope our rebuttal will make you more confident that the community would benefit from learning about our contributions at the NeurIPS Conference 2023. Thank you! The authors **References** He B, Yuan X. On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM Journal on Numerical Analysis. 2012. He B, Yuan X. On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numerische Mathematik. 2015. Liang J, Fadili J, Peyré G, Luke R. Activity identification and local linear convergence of Douglas–Rachford/ADMM under partial smoothness. In Scale Space and Variational Methods in Computer Vision: 5th International Conference. 2015. Altschuler J, Weed J, Rigollet P. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. 31st Conference on Neural Information Processing Systems, 2017. Lin T, Ho N, Jordan M. On efficient optimal transport: An analysis of greedy and accelerated mirror descent algorithms. International Conference on Machine Learning, 2019. --- Rebuttal Comment 1.1: Comment: Having read the other reviews and the author's rebuttal, I am more confident of the strength of this submission.
In particular, I believe that, while the experimental section may not be perfect, the work is still worth publishing, as it improves efficiency by technical (i.e. implementation) means as well as algorithmically. I appreciate the effort of the authors in improving the clarity of the theory section. --- Reply to Comment 1.1.1: Comment: Dear Reviewer PSMB, Thank you for reading and reflecting on the other reviews and our rebuttal. We are very pleased to learn that you have increased your initial score - thank you! We are currently exploring a number of additional examples that we hope to add to our code repository (a reference to the repo will be added to the paper upon acceptance), and are looking into how we could leverage the BenchOpt infrastructure to simplify reproducibility of the experiments and comparison with alternatives. We hope that this will constitute a good complement to the experimental results already included in the paper. Sincerely, The Authors
Summary: The paper presents an extension of the Douglas-Rachford algorithm in [23] to regularised optimal transport. It describes conditions under which sparsity-inducing regularisations result in sparse solutions and convergence rates of the resulting estimates. Implementation on GPU, as well as gradient routines, are considered in detail. Some numerical experiments illustrate the performance of the method. Strengths: To the best of my knowledge, the introduction of convex regularisation as part of a DR algorithm is novel. It seems to bring several benefits, both computational (speed + stability) and theoretical (sparsity and convergence rate guarantees). The article is comprehensive in its treatment, considering parallelisation, gradient computation, and algorithmic designs such as step-size selection, etc. While I have not had time to go through the supplementary material in detail (I hope to be able to do so later on), the theoretical analysis looks sound and reasonable. Weaknesses: The experimental comparison with [4] is unfair: the implementation of the proposed method is fully tailored, while the choice for [4] is a generic and not optimised PyTorch implementation. To improve this, the same effort should be spent on developing a custom CUDA kernel for [4] as for RDROT or providing a generic PyTorch implementation for both, rather than optimising the proposed method only. **IF** it is impossible that [4] can be improved, this must be explicitly discussed and explained. Additionally, no comparison with Sinkhorn for entropy-regularised OT (and its accelerated versions, see for example http://angkor.univ-mlv.fr/~vialard/wwwsrc/AndersonMultiMarginal.html and https://github.com/ott-jax/ott/blob/main/src/ott/solvers/linear/acceleration.py) is given. Arguably, this is not sparsity-inducing, but (i) the title and introduction directly refer to [10], and (ii) the smoothness of the transport plan is sometimes a desirable property. 
Again, the comparison should be done with the same amount of implementation effort (either both custom-made kernels, or both high-level). While Section 3.3 discusses gradient computation, the supplementary is limited to the gradient w.r.t. the cost function. In general, people tend to be just as interested, if not more, in the gradient w.r.t. the marginals (weights and positions). Some discussion on this would be welcome. I also believe that (for at least the quadratic regularisation) the optimal transport plan is not differentiable w.r.t. the OT inputs. A quick discussion of this in the supplementary would be welcome. Finally, in terms of literature review, I am surprised that https://epubs.siam.org/doi/10.1137/130920058 (published in 2014) is not mentioned given that all the methods introduced in [23] (published in 2022) can, as far as I understand, be found in it. This does not impact the novelty of the current work (as I am not aware of a regularised version thereof), but proper attribution of the DR algorithm should go to https://epubs.siam.org/doi/10.1137/130920058 rather than [23]. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: My main point of concern is the unfair empirical comparison. Because of this, I cannot trust the paper's title claim of "lightspeed", given that its performance may very well be implementation specific rather than methodological. I would be willing to substantially strengthen my rating if this was solved. See also the other (more minor and easy to fix) weaknesses. #### Minor points: - The notation for the prox operator $\mathrm{prox}_{\rho f}$ is undefined in the paper. - $e$ and $f$ in Equation (6) are undefined. Please replace with $\mathbf{1}_{m/n}$ as necessary. - Typos: I did not spot any in the main text, but spotted two in the supplementary (assumpion, opertion), there may be other ones. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the time you invested in reviewing our paper. Your feedback means a lot to us - thank you! We have rerun our experiments with a PyTorch version of RDROT. Naturally, this results in a performance drop for our algorithm, but the experiments still show that our algorithm is faster than the state-of-the-art. Please see Figure 1 and Figure 2 in the authors' rebuttal. With that said, since the main computational bottleneck in RDROT is the update $X \leftarrow X -\rho C$, it is difficult to fully utilize the GPU parallelization and efficient memory management without tailoring a kernel (that is, by using working blocks and warp reductions). Sinkhorn-based methods are different since their memory-intensive operations are matrix-vector multiplications, which libraries such as PyTorch and TensorFlow handle efficiently. If you think commenting on this will improve the paper, we will add a short comment in the final version of the paper. "Lightspeed computations" in this context refers to Cuturi's seminal paper on Sinkhorn's algorithm for OT. Therefore, when we assert that we bring regularized OT to lightspeed, we mean that we manage to develop an algorithm for regularized OT that has similar or better performance than Sinkhorn (on the entropically regularized OT problem). As Figure 2 in the Supplementary material and Figure 2 in the PDF attached to the Author rebuttal suggest, the per-iteration cost of our method is similar to that of Sinkhorn. This, together with the improved iteration complexity, in our view suffices to say that our approach estimates transportation plans with lightspeed computations. If you believe that this would improve our paper, then we could also include explicit figures of the wall-clock execution times of the two methods. With this understanding, techniques that accelerate the practical convergence of SK execute "faster than lightspeed".
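The contrast drawn above can be made concrete: Sinkhorn's memory-intensive operations are indeed plain matrix-vector products with the kernel matrix, which is why a high-level framework suffices for it. A minimal NumPy sketch of the entropic-OT iteration (illustrative only; the function name, toy data, and parameters are ours, not from the compared implementations):

```python
import numpy as np

# Minimal Sinkhorn iteration for entropically regularized OT. The
# memory-heavy steps are the matrix-vector products K @ v and K.T @ u,
# exactly the operations PyTorch/TensorFlow parallelize well.

def sinkhorn(C, a, b, eps=0.5, iters=300):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # scale columns toward marginal b
        u = a / (K @ v)                  # scale rows toward marginal a
    return u[:, None] * K * v[None, :]   # plan diag(u) K diag(v)

rng = np.random.default_rng(0)
C = rng.random((5, 7))                   # toy cost matrix
a = np.full(5, 1 / 5)                    # source marginal
b = np.full(7, 1 / 7)                    # target marginal
P = sinkhorn(C, a, b)                    # marginals of P match a and b
```

At convergence, the row and column sums of `P` match the prescribed marginals, while the RDROT update discussed above instead touches a full dense matrix each iteration.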
Our focus in this paper is really on regularized OT problems, not on the pure OT problem that Mai et al. considered, nor on the multimarginal problems treated in the post that you shared. For regularized problems, we are already able to attain significant speed-ups relative to the state of the art. It is an interesting idea to apply Anderson Acceleration also to the RDROT iterations, but it would be memory-intensive and probably difficult to scale even to medium-size OT problems. To differentiate with respect to the marginals, one can use a similar argument to the one discussed in the paper, but on the dual problem. We elaborate on this in the Author's Rebuttal. To our knowledge, most sparsity-promoting regularizers will result in transportation plans that are differentiable at least almost everywhere - but the resulting Jacobians will be zero. Developing ways of working around this is an active area of research, e.g. Sahoo et al. 2023, and it would be interesting to explore how this could further be applied within this framework. We appreciate you sharing the typos you've found in our manuscript. We will revise according to your comments when polishing the final version. Thanks a lot! The authors **References** Sahoo SS, Paulus A, Vlastelica M, Musil V, Kuleshov V, Martius G. Backpropagation through combinatorial algorithms: Identity with projection works. International Conference on Learning Representations, 2023. --- Rebuttal Comment 1.1: Title: Acknowledgement of the rebuttal Comment: I thank the authors for this very clear and detailed rebuttal. I will, as I planned, raise my score, on grounds of improved soundness. However, this point > Finally, in terms of literature review, I am surprised that https://epubs.siam.org/doi/10.1137/130920058 (published in 2014) is not mentioned given that all the methods introduced in [23] (published in 2022) can, as far as I understand, be found in it was not addressed by the authors.
I would like to insist that the original attribution of the DROT methodology should go to it rather than [23]. On a side note, I have not found time to thoroughly check the proof for the sparsity-inducing properties, but have skimmed through it once more and still found it sound at a superficial level. I however can't raise my confidence score because of this. --- Reply to Comment 1.1.1: Comment: Dear Reviewer e6yJ, we are pleased to learn that you are satisfied with our rebuttal and increased your score to "Accept". We apologize that our rebuttal forgot to discuss the missing citations in our first draft. We will (and always intended to) add them, in particular https://epubs.siam.org/doi/10.1137/130920058, to the final version of the paper. Sincerely, The Authors
Summary: In previous work [23] (DROT), the Douglas-Rachford splitting was applied to solving unregularized optimal transport (OT). The idea is to split the original variable into two variables $X$ and $Z$, where $X\ge 0$ and $Z$ has prescribed row and column sums. The current paper extends this idea to regularized OT. Compared to previous work [23], there are some novelties such as the local linear rate of convergence (Section 3.1, based on the results of Liang et al. [19]) and differentiation (Section 3.3). Strengths: The paper is well written in general. Compared to DROT [23], this paper provides a local linear rate of convergence and a differentiation result. I particularly like the latter, it's a nice result and is useful in deep learning applications. The obtained algorithm with its GPU implementation is very efficient and has the potential to replace existing algorithms in many applications (though I have to say that this is rather an encouragement, because the same thing could have been said about DROT [23], yet after more than two years since its publication, it hasn't been adopted in any research paper (3 citations at the time of writing, including one from the current submission); I wonder why this is the case). Weaknesses: The main limitation of this paper lies in its algorithmic contributions: there are virtually none. Unfortunately this is a major issue. The extension of DROT to the regularized case is rather straightforward. This is a very nice extension of DROT, but it alone is a rather weak contribution. Some presentation issues: - You should cite Sinkhorn and Knopp [31] when first mentioning the Sinkhorn algorithm in the introduction. Cuturi's work [20] should be cited only for its efficient implementation. - You should cite Bauschke et al. 2021 (Projecting onto rectangular matrices with prescribed row and column sums) at line 138 when mentioning the projection onto X. It seems to me that this projection is the heart of both DROT and your algorithm.
- Regarding the definition 2.1 of "Sparsity promoting regularizers" on page 3: according to this definition, it seems that the entropic regularizer is also a sparsity promoting one. However, the current discussion seems to indicate that the entropic regularizer is not suited for sparsity. Thus more clarifications and discussions are needed here (or maybe just choose another name for the definition). - I think the details on the stopping conditions and backpropagation are important and should be presented in the main content rather than in the appendix. Minor typo at line 93: it should be $\langle X, X \rangle$ instead of $\langle X, Y \rangle$ in the definition of the norm. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What happens if we use the entropic regularizer in RDROT? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
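The projection onto matrices with prescribed row and column sums that the review calls "the heart of both DROT and your algorithm" admits a simple closed form. A minimal NumPy sketch, derived here from the projection's optimality conditions (the function name and toy data are illustrative; this is not the paper's fused GPU kernel, and nonnegativity is handled by the other prox in the splitting):

```python
import numpy as np

# Euclidean projection of A onto the affine set
#   {Z : Z @ 1_m = r,  Z.T @ 1_n = c},   requiring sum(r) == sum(c).
# The projection has the form A + u 1^T + 1 v^T for correction vectors u, v
# chosen so that both marginal constraints hold.

def proj_marginals(A, r, c):
    n, m = A.shape
    v = (c - A.sum(axis=0)) / n              # column-sum correction
    u = (r - A.sum(axis=1) - v.sum()) / m    # row-sum correction
    return A + u[:, None] + v[None, :]

rng = np.random.default_rng(1)
A = rng.random((3, 4))
r = np.array([0.2, 0.3, 0.5])                # desired row sums
c = np.array([0.1, 0.2, 0.3, 0.4])           # desired column sums (same mass)
Z = proj_marginals(A, r, c)                  # row/column sums now equal r, c
```

Each call costs one pass over the matrix plus two small vector operations, which is why this step parallelizes so well on a GPU.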
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive feedback. We are grateful for your positive words about our work but, quite naturally, disagree with the statement that the algorithmic contributions are limited, or that this should be reason enough to reject the paper. Let us elaborate. We believe that an important message in the work by Mai et al. 2022 was to highlight that for most existing algorithms, the unregularized OT problem is memory bound. In other words, even though there are many algorithms for solving the OT class of linear programs with an excellent iteration complexity, they are not amenable to efficient implementations beyond toy examples. In the same spirit, it is our opinion that the main contribution by Mai et al. was not to simply apply DR splitting to OT but rather to do it in a way that enables a memory-efficient GPU implementation. A limitation of the work by Mai et al. 2022 is that they focused on the unregularized OT problem. Although they reported crisp transport plans with significantly reduced blur compared to SK solutions, such high-accuracy solutions may not be needed in scenarios where the underlying data is noisy. This limits the practical usefulness of the DROT algorithm. The main contribution of this work has been to identify a broad class of regularizers that can be dealt with efficiently on GPUs. Such problems have many applications but, quite surprisingly, few (if any) efficient algorithms. In contrast, the algorithm that we propose is surprisingly simple, yet amazingly efficient. We believe that there are very few NeurIPS contributions that can, as we do, present an algorithm that runs 100x faster than the state-of-the-art on a well-established problem class. Indeed, one may argue that such an achievement alone should make the paper very interesting to a significant part of our community, and therefore qualify it for publication.
To the best of our knowledge, the class of sparsity promoting regularizers is novel in the context of optimal transport, and its usefulness has not been pointed out in the literature before. As you, and the other reviewers, point out, these are not the only contributions of the paper. We also provide global and local theoretical guarantees that apply to the sparsity-promoting regularizers. Further, we show in our experiments that our framework is readily applicable to a range of problems, including domain adaptation and learning of generative models. To make things easy for practitioners, we also wrapped RDROT in the autograd frameworks PyTorch and TensorFlow. We believe there has been a minor misunderstanding regarding the question on entropic regularization. Entropic regularization, $H(X) = \sum_{ij} h(X_{ij})$ with $h(x) = x \log x$, is not sparsity-promoting, since it is not closed. By extending its domain so that $h(0) = 0$, it becomes closed, but it is still not sparsity promoting since $h(1/e) = -1/e < h(0)$. Therefore, our framework, in its current form, cannot handle this particular regularizer. However, if entropy-regularized OT is of interest, Sinkhorn-based approaches do an excellent job estimating optimal transportation plans. What we are trying to accomplish in our work is to go beyond this regularization scheme. One interesting direction to explore further is using hyperentropic regularization to interpolate between quadratic regularization and entropic regularization, which can be handled by our method. Further, we think your input on the presentation issues is fair - we will add the citations you recommended and fix the typo. We considered your suggestion to move the stopping criterion from the appendix, but we found that the current organization of the manuscript gives a better flow of ideas, and is more readable. We hope that our answers will make you reconsider your scores. Thanks!
The authors **References** Vien V Mai, Jacob Lindbäck, and Mikael Johansson. A fast and accurate splitting method for optimal transport: Analysis and implementation. International Conference on Learning Representations, 2022 --- Rebuttal 2: Comment: Dear WrJb: can you read the authors' response, and see if your comments are addressed?
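The entropy argument in the rebuttal above ($h(1/e) = -1/e < h(0) = 0$, so the extended entropy does not favor zero entries) is easy to verify numerically; a small sanity check with an illustrative helper `h`:

```python
import math

# The extended entropy term h(x) = x*log(x), with h(0) = 0 by continuity.
# It dips below h(0) on (0, 1), so sparse (zero) entries are not the
# cheapest choice: h fails the sparsity-promoting condition discussed above.

def h(x):
    return 0.0 if x == 0 else x * math.log(x)

val = h(1 / math.e)   # equals -1/e, strictly below h(0) = 0
```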
Rebuttal 1: Rebuttal: Dear AC and reviewers, Thank you for the time you invested in the peer-reviewing process. The input you provided has been both insightful and inspirational for us. The reviewers have recognized many strengths and novelties of our contributions, including: - the generality of the algorithm for regularized OT (it is readily applicable to many regularization schemes and constraints). We achieve this without compromising the numerical performance or the theoretical guarantees (e6yJ, PMSB, 4HzB). - by using a tailored GPU kernel, our algorithm leads to a ~100x speedup compared to the state-of-the-art for several problems (WrJb, e6yJ, PMSB, 4HzB). - we establish a competitive global $1/k$ rate that holds for all considered regularizers, and a local linear rate under additional, weak assumptions (WrJb, e6yJ, PMSB). For clarity, when we state that we bring regularized OT to lightspeed, what we mean is that RDROT has computational advantages comparable with Cuturi's Sinkhorn method. We provide theoretical, numerical, and computational justifications for this claim. Our method enjoys rates that are better than those of the state-of-the-art, its per-iteration cost is low, and it benefits from GPU parallelization. Since our algorithm depends on operations that are difficult to parallelize in high-level frameworks such as PyTorch, to fully utilize the computational potential of the GPU, we've developed a GPU kernel for our framework. During this rebuttal, however, we found that an RDROT version implemented in PyTorch is still competitive for many problems. We illustrate this in Figure 1 in the attached PDF, in which we compare two versions of QDROT (RDROT applied to quadratically regularized OT) to an L-BFGS method. Although the torch version is almost 10 times slower, it is still significantly faster than the L-BFGS method, which is among the more popular algorithms for this application.
To enable practitioners to use our frameworks for deep learning, we developed a PyTorch and a TensorFlow wrapper that feature automatic differentiation through regularized OT costs. Some reviewers showed interest in this, including WrJb and e6yJ, which we are very pleased about. However, we stress that there are many possible extensions here. For instance, to differentiate with respect to the marginals, one can use the estimated dual variables $\phi/\rho$, and $\varphi/\rho$ as gradient approximations (see PDF for more details). To differentiate with respect to the transportation plan itself, one must use different techniques since the derivative is typically zero almost everywhere. We believe that the techniques proposed by Sahoo et al. (2023) could be worth exploring in this framework. The feedback provided by the reviewers will contribute towards an even stronger final manuscript. The questions and criticisms raised by the reviewers have been taken into careful consideration - we are confident that we addressed all major concerns and the vast majority of the more minor suggestions. We believe our paper - which has been improved further by this reviewing process - would be of great value for the community, and hope that you agree that it is worthy of being presented at the conference in December. Thank you! The authors **References** Sahoo SS, Paulus A, Vlastelica M, Musil V, Kuleshov V, Martius G. Backpropagation through combinatorial algorithms: Identity with projection works, International Conference on Learning Representations 2023 Pdf: /pdf/a9ecc172cf87973761a341e863d29e0cc0796a87.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Paxion: Patching Action Knowledge in Video-Language Foundation Models
Accept (spotlight)
Summary: The paper tackles the known issue of CLIP-like models acting similarly to bag-of-words models, in which structured information (e.g. relations between objects) is not really captured, so for action recognition the information leveraged is mostly object and scene (e.g. a person beside a guitar vs a person playing a guitar). The authors propose to use some extra augmentations related to action recognition information (both on the visual side and the language side), and also add an adaptor (a perceiver/Q-former) that becomes trainable using the extra augmentations. The underlying L&V model is kept frozen, so the training is efficient and the underlying model is not degraded for other domains. Strengths: - clear issue, sensible yet innovative solution, good results, all in a well-written paper. Weaknesses: - Right now there is little information in the main text that points to the "patching data"; only the appendix has this info. I think this should be made very clear in the text and in the tables. One imagines that there's domain-specific fine-tuning, but it is not a must: it could be that the calibration set comes from a mixture of sources so the model has general applicability off-the-shelf. That the setting is the former is important. Does this make sense? - The other implication is: this assumes some CLIP-like model for video, and then uses the "knowledge patching" trained on downstream on-domain data. But one could equally decide to go the other way and start with the image CLIP, and train an adaptor to make it work on video using downstream data. This is the approach, for example, of ST-adapter and AIM. These comparisons are not included in the paper. (Not important but maybe interesting) The paper (text augmentations) reminded me of the following paper: "CVPR'23, Teaching Structured Vision & Language Concepts to Vision & Language Models", which is not discussed. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The main question is how this method compares against CLIP + ST-adapter or a similar strategy for adapting CLIP to video through lightweight adapters. Open to hearing the authors' opinion about the first point (the setting). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No. Authors could comment on the setting (the need to have downstream training data), or the standard issues with vision + language pre-training, for example. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer BUR1 for the constructive comments. We are glad that you find our paper to be innovative and well-written. We will address your comments and questions in the following paragraphs. ### Little information about the “patching data” We appreciate the reviewer’s suggestion to include more details about the setting and the construction of the patching data. In the next version, we will clarify the following points in the main text: 1. The construction of the patching dataset (action antonyms and reversed videos, detailed in Appendix B) is fully automatic, meaning that users can create their own patching datasets easily. 2. As shown in Table 7, the patching dataset and the fine-tuning dataset are not necessarily the same (e.g., patching with SSv2-label but fine-tuning on SSv2-template). Hence, as mentioned by the reviewer, it is possible to create a mixture of patching datasets for the model to learn general action knowledge and then apply it to a new task in a zero-shot manner. The zero-shot cross-domain transfer result in Appendix A demonstrates some promising initial results regarding this generalization ability, but there is still large room for improvement. ### Comparison with CLIP-adapted models We agree with the reviewer that a promising line of work for solving video-language tasks is to initialize and adapt from strong image-language models, such as CLIP. In fact, we also included a recent representative model along this line as one of our backbones, CLIP-ViP. We want to reemphasize that the primary motivation of Paxion is to patch missing action knowledge into existing frozen foundation models, including those CLIP-adapted models. As depicted in Table 1, the original CLIP-ViP still demonstrates nearly random performance on our ActionBench, while Paxion significantly improves its action understanding. 
We thank the reviewer for the suggestion to add a comparison with the CLIP+ST-adapter setup and will investigate it in the next version. ### Comparison with the "CVPR'23" paper We appreciate the reviewer's recommendation to compare our work with the paper "CVPR'23, Teaching Structured Vision & Language Concepts to Vision & Language Models." In the next version, we will incorporate the following discussion in the related work section. Although the high-level idea is related, especially concerning negative instance generation, we identify the following major differences: 1. They focus on object attributes and relations in static images, whereas we focus on actions and state changes in dynamic videos. 2. They adopt a LoRA-based PEFT method, while we propose a novel Perceiver-based patch-and-fuse framework, enabling the backbone to be fully frozen. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I have read the other reviews and all the author responses. I don't see much to change my original appraisal in either the reviews or the author responses. Regarding the specific reply to my comments: - I would appreciate it if the patching data were more explicitly defined. I understood the authors' reply, but was just pointing out that this info is important and it is quite buried. - There is no need to compare against the CVPR paper, nor argue why it is different. I just pointed out a reference that seems relevant and maybe useful. I have nothing to do with the authors of that paper. I'll be maintaining my score, which is already positive. --- Reply to Comment 1.1.1: Comment: Thank you for your tremendous effort in reviewing and providing valuable comments! They are very helpful for enhancing the submission.
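For concreteness, the automatic patching-data construction discussed in this thread (action-antonym captions plus reversed videos, per Appendix B) can be sketched roughly as follows. The antonym dictionary, record field names, and helper function are our own illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of automatic "patching data" construction:
# an action-antonym text negative and a reversed-video negative per example.
# The antonym pairs and field names below are hypothetical.
ACTION_ANTONYMS = {"push": "pull", "open": "close", "pick up": "put down"}

def make_patching_example(frames, caption):
    """Return a record with the positive pair plus both hard negatives."""
    antonym_caption = caption
    for verb, anti in ACTION_ANTONYMS.items():
        if verb in caption:
            antonym_caption = caption.replace(verb, anti)
            break
    return {
        "video": frames,
        "caption": caption,
        "caption_antonym": antonym_caption,   # negative for a VAC-style loss
        "video_reversed": frames[::-1],       # negative for an ATM-style loss
    }

ex = make_patching_example(["f0", "f1", "f2"], "push the cup to the left")
```

Because both negatives are derived mechanically from the positive pair, the same pipeline can be applied to any captioned video dataset, which is the sense in which the construction is "fully automatic".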
Summary: In this paper, the authors introduce an interesting ActionBench, aiming to handle action antonym and video reversal problems. To remedy the problem in well-trained VidLMs, the authors propose the DVDM objective, along with a knowledge patcher and fuser. Extensive experiments demonstrate the effectiveness of the novel objective and modules. Strengths: - Novel and interesting ActionBench. - A simple modified contrastive objective that enhances VidLM abilities on the action antonym and video reversal problems. - Extensive ablation studies show the effectiveness. Overall, I appreciate the paper's idea to handle the interesting ActionBench. Though the problem has been noted before in SthSth, there isn't any work trying to handle it. The authors propose a smart fine-tuning method with the perceiver as a knowledge patcher and fuser, and they design a modified contrastive loss for fine-tuning the VidLM. Weaknesses: In fact, the VAC and ATM losses are modified contrastive losses with different positive and negative examples. However, the authors give a complicated motivation in Section 3.1 for their DVDM, which makes the paper harder to read. I suggest presenting the idea in a more easy-to-follow way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer CJDm for the constructive comments. We appreciate that you find our ActionBench to be novel and interesting. We will address your comments in the following paragraph. ### Complicated motivation for the DVDM objectives We appreciate the reviewer's suggestion, and will revise section 3.1 to provide a clearer presentation of this motivation. The reason we motivate DVDM from the Markov Decision Process (MDP) is to give a more **generalized formulation** of the new **video dynamic modeling objective**. Due to the unique challenges mentioned in line 171, we leveraged a “relaxed” formulation of the dynamic modeling and integrated it into a contrastive learning framework as VAC and ATM losses. However, in scenarios with cleaner training data or within a fully controlled environment (e.g., simulator), it is definitely feasible to explore a generative version of the video dynamic modeling objective in future research. Our current "discriminative" version serves as a straightforward yet effective initial step towards fine-grained modeling of visual changes in videos. --- Rebuttal Comment 1.1: Title: response to authors Comment: I really appreciate the interesting idea of ActionBench, thus I keep positive for the paper. I hope the author can give a more straightforward introduction for others to follow. --- Reply to Comment 1.1.1: Comment: Thank you so much for your positive feedback and your great effort in reviewing! We appreciate your suggestions on the paper writing!
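As the reviewer observes, the VAC and ATM losses discussed above are essentially contrastive losses with tailored hard negatives. Below is a minimal NumPy sketch of an InfoNCE-style step with an action-antonym hard negative; the embeddings, temperature, and function are our own simplification for illustration, not the paper's implementation.

```python
import numpy as np

def antonym_contrastive_loss(v, t_pos, t_neg, tau=0.07):
    """InfoNCE with a single action-antonym hard negative.

    v, t_pos, t_neg: unit-normalised embedding vectors for the video,
    the correct caption, and the antonym caption, respectively.
    """
    s_pos = np.dot(v, t_pos) / tau
    s_neg = np.dot(v, t_neg) / tau
    # Softmax cross-entropy with the true caption as the target,
    # computed with a max-shift for numerical stability.
    m = max(s_pos, s_neg)
    log_z = m + np.log(np.exp(s_pos - m) + np.exp(s_neg - m))
    return log_z - s_pos

def normalise(x):
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

v      = normalise([1.0, 0.2, 0.0])   # video embedding (toy values)
t_push = normalise([0.9, 0.3, 0.1])   # "pushing the cup" (correct caption)
t_pull = normalise([-0.8, 0.4, 0.2])  # "pulling the cup" (antonym negative)

loss_good = antonym_contrastive_loss(v, t_push, t_pull)
loss_bad  = antonym_contrastive_loss(v, t_pull, t_push)  # labels swapped
```

Swapping which caption is treated as the positive should raise the loss; the same template would cover an ATM-style term by using the reversed video's embedding as the hard negative on the visual side.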
Summary: The manuscript presents an Action Dynamics Benchmark (ActionBench) with three new evaluation metrics for existing video-language models. Through ActionBench, the authors find that existing video-language models essentially rely on recognizing objects to recognize actions. Hence, a parameter-efficient component named Knowledge Patcher is connected in parallel to the video-language model and is trained with Discriminative Video Dynamics Modeling to improve the action understanding ability of the model. Finally, the authors present a knowledge fuser to infuse the knowledge learned by the Knowledge Patcher into the video-language model for downstream tasks. Empirical results show that training a knowledge patcher is effective in improving the video-text retrieval and temporal VQA tasks. Strengths: - Although it has been proposed before that current video models may rely on spatial information to recognize actions in videos, the manuscript is the first to present such a benchmark with proper evaluation metrics to show the problem. This can inspire further work to solve the problem in video understanding. - The Discriminative Video Dynamics Modeling is straightforward and effective in training the video-language model to be aware of the action and the temporal direction of the videos. - The approach that the manuscript presents is similar to post-pre-training, and it is shown in Table 2 and Table 3 that such a post-pre-training strategy can benefit downstream retrieval tasks. - The writing is good and the organization of the manuscript is clear. Weaknesses: - The evaluation in the downstream task is limited to video-text retrieval on SSV2 and VQA on NExT-QA, which makes it difficult to compare the presented approach with existing methods. Since the evaluation is based on the InternVideo backbone, it is possible to provide some other comparisons where InternVideo has some published results. 
- Further on the above note, I am not quite familiar with VQA, but on SSV2 the performance seems oddly low. Especially for video-to-text retrieval, which is essentially a text-driven classification for SSv2 videos, the performance has reached 61% for EVL-B/16 [1] when 8 frames are used, which also connects a side network to the pre-trained encoder. I didn't find details in either the manuscript or the supplemental material on why this is the case. Could you provide an explanation for this? [1] Frozen CLIP Models are Efficient Video Learners Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - To help object-centric understanding, why don't you use the pre-trained representations from the backbone as well (e.g., concatenate or add them to the KP during fine-tuning)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer SP6c for the constructive comments. We are glad that you find our benchmark to be inspiring and our paper to be well-written. In the following paragraphs, we will address your comments and questions. ### Limited downstream tasks for evaluation We completely agree that evaluating more downstream tasks would be ideal. However, as mentioned in the Introduction, many popular video-language downstream tasks still suffer from a strong single-frame bias, which does not faithfully reflect a model's understanding of action knowledge. This is precisely why we propose ActionBench; it serves as an initial step in building better Vid-L benchmarks that require true **video** understanding. We will investigate extending the probing tasks to connect with more real-world downstream applications in future work. ### Low performance on SSv2 We thank the reviewer for pointing out this interesting observation. There are several key differences between our setting and previous work. We will make these points clearer and investigate further in the next version: 1. In Paxion, as discussed in Sec 3 and Table 1, both the Knowledge Patcher and the Fuser are much more lightweight (one Perceiver layer; only 0.9% of the parameters) compared to the side module in [1]. This may result in limited expressiveness. Nevertheless, we acknowledge that further work on scaling would be beneficial. 2. One of the fine-tuning objectives (DVDM) in Paxion emphasizes action understanding, while the video-to-text retrieval (SSv2-label) is more object-centric. Figure 6 shows that DVDM significantly benefits more action-centric and temporal-heavy tasks, such as video-to-action retrieval tasks. 3. The backbone model might favor certain datasets. Although InternVideo demonstrates generally better zero-shot video retrieval performance than CLIP on many datasets (e.g., MSR-VTT, MSVD, etc.), there is no result on zero-shot SSv2 in the paper. 
This makes us wonder whether CLIP is favored on SSv2 compared to InternVideo. We thank the reviewer for the insight and will include more controlled experiments to investigate this. 4. Currently, we only fine-tuned on SSv2-label for one epoch for efficiency, as our main goal is to demonstrate the impact of the proposed DVDM objective versus VTC. We will investigate fully fine-tuning until convergence in the next revision. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I would encourage the authors to provide some additional results on some popular existing benchmarks to demonstrate a clearer position of the approach in comparison with the existing approaches (it is ok to include them in the supplemental materials too). As for SSv2, I look forward to the full experiment results in the next version. Overall, I keep my positive rating of the manuscript. --- Reply to Comment 1.1.1: Comment: Thank you so much for your revision suggestions and your tremendous effort in reviewing! We are glad to hear that you maintain a positive rating of the submission!
Summary: This paper addresses the task of improving video-language understanding models, with a particular emphasis on their ability to align described actions to video segments and also their ability to model temporal dynamics. The authors first propose ActionBench, which is a modification of two datasets (Ego4D, SS-v2) for probing the degree to which action antonyms and reversed videos impact the models, and show that frozen VidLMs do not perform these probing tasks well. The authors then propose Paxion + a DVDM training objective: Paxion has a knowledge patcher network (better action encoding) and a knowledge fuser (incorporating into frozen VidLMs), and DVDM extends the standard contrastive VTC loss to better correlate action text with the specific ordering of the video frames. The authors then benchmark their approach on downstream datasets and tasks (SS-v2, NExT-QA, etc.) Strengths: `+` Action/temporal understanding in large-scale pretrained video-language models is an important topic. `+` ActionBench contains probing tasks that seem like they may be interesting for future investigations, and the authors plan to release data and code for reproducibility. `+` The proposed Paxion/DVDM improvements seem to make a difference for the probing tasks relative to the base frozen VidLM, and there are ablations to show this. `+` Downstream evaluations on both SS-v2 and NExT-QA datasets to characterize the proposed model. Weaknesses: `-` There seems to be a limitation of how much the proposed techniques/tasks make a difference in the final downstream setting (e.g., side-tuning seems to ~match Paxion, within 0.3 points). This calls into question how over-specific the design of the proposed techniques and objectives is to the particular definitions of the proposed probing tasks in ActionBench (and by extension, how useful those formulations of the probing tasks are in downstream settings beyond SS-v2). 
(e.g., in [36] there is stronger impact on a larger range of downstream video distributions/tasks) `-` The analysis for NExT-QA also seems incomplete. Prior/concurrent work (e.g., [5, 36]) report more detailed breakdowns of the accuracy into the specific causal/temporal splits of NExT-QA (are the improvements coming in the right categories?). Further, these other works report accuracy on a harder subset (ATP-hard) that makes the improvements from their verb/temporal augmentation techniques clearer. There is significant space in the current paper (Table 3) for these numbers (and they do not require additional compute, since they are all subsets), so it is unclear why they are not reported especially since they may help to bolster the core claims of the work. `-` The probing tasks seem closely related to ones proposed in prior work [5], but there doesn't seem to be a full discussion / comparison in this work of what these tasks add to those other ones (there is a brief citation in related work, but no discussion of this kind). Having a clearer sense of the potential complementarity would help to better clarify the contribution of this work (relatedly, how text/visual biases are controlled for in the probing tasks proposed here). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses section above for questions / areas to address in the rebuttal. Because the preliminary rating is borderline, it will be very helpful to finalize/potentially improve the rating. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors provide a brief discussion of the limitation on focusing on one type of knowledge patch in the conclusion + dataset cards in the supplement. 
--- --- **Post-rebuttal update:** The authors partially address the weaknesses/comments described in the initial review: `+` The additional analysis on the NExT-QA results helps to better support their method/claims (compared to the original result, which showed minor differences), and we can see a larger gap on some settings for temporal/causal/etc where it is helping now. `+` The additional discussion with related work provides helpful context of how these works can be viewed as complementary to one another. `-` The limitations around the impact to downstream tasks, however, still remain. The authors make the argument that there are not a lot of temporal datasets, which is understandable. However: 1. Given one of the highlighted contributions of the work is proposing a new *benchmark* for probing (ActionBench), it would have been much stronger to make a connection to at least a couple other datasets that represent a spectrum of temporal/action understanding. Does ActionBench performance help to differentiate downstream settings where temporal/action understanding correlates well? Why or why not? (This is something that is illustrated by the proxy tasks in [5] well, in Table 5 of their work; it would have been interesting to see if this benchmark helps to complement/refine that other analysis). 2. Furthermore, the fact that many datasets are not temporal does not mean there aren't *any* additional datasets (or subsets of datasets) for event reasoning that could have been interesting to consider (e.g., [5] considers AGQAv2; there are others like STAR as well; [36] identifies an action-focused split of Kinetics). 3. Finally, it's worth noting the current work considers *two* evaluation datasets (SSv2 and NExT-QA), and SSv2 is also the basis for the original probing task (so that leaves only one fully new dataset to make a connection to). 
In contrast, related and concurrent work consider *many more* to help establish the broad efficacy of their proposed learning objectives, techniques, and analysis tasks (e.g., [5] considers 5+ downstream, [36] considers at least 4). In general, I don't think it is strictly necessary for papers to benchmark on tons of datasets, but given that a core contribution of the work is centered around a proposed probing task (and corresponding remedy), establishing this connection with at least one other dataset would have significantly improved the contribution of the work. `-` The author response that "the way we construct the probing datasets...is fully automatic, meaning that it is not confined to a particular dataset such as SSv2" is not necessarily persuasive, since this pipeline is tied to the specific (templated) language distribution of SSv2, and more importantly, some of the tasks may not make sense on other video distributions besides SSv2 (e.g., reversing the frame order may only make sense on SSv2's short videos of gestures/object interactions, where the reversed video can map to a realistic new video ("push object" $\leftrightarrow$ "pull object"), whereas if you show a model a video of people walking backwards or "uncooking" a pancake, it isn't quite as effective as a probing task since that's not a realistic new action). --- **Overall (post-rebuttal):** I think the rebuttal response did help to address some key concerns, and I do think there is value in considering this work in the broader context of other related efforts [5, 36], so I am increasing my rating leaning towards acceptance. I hope the points mentioned above regarding continued areas of improvement will be helpful for future revisions and/or later work. --- Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer ou3W for the detailed and constructive comments. We appreciate your acknowledgment of the value of our proposed probing tasks and the DVDM objective. We will address your comments and questions in the following paragraphs. ### Limited impact on downstream tasks 1. First, we would like to clarify that the Side-Tuning row is not entirely a standalone baseline, since it also incorporates our proposed VTC+DVDM objectives. The motivation for comparing with Side-Tuning is to demonstrate that the Patcher-Fuser framework in Paxion can perform competitively with, or even better than, current PEFT frameworks. When comparing Paxion with the baselines that use only the VTC objective in Tables 2 and 3, we do observe a significant improvement in Paxion's performance on downstream tasks, highlighting the impact of the DVDM objective. *(Please also see the updated Table 3 below.)* 2. As detailed in Appendix B, the way we construct the probing datasets (i.e., the action antonym, video reversal, and object replacement) is fully automatic, meaning that it is not confined to a particular dataset such as SSv2. 3. We totally agree that it would be ideal to show impact on more real-world downstream tasks. Unfortunately, as mentioned in the Introduction, many popular video-language downstream tasks still suffer from a strong single-frame bias, underscoring the need for better Vid-L benchmarks. Our ActionBench serves as an initial step in this direction, and we will further explore establishing stronger connections to real-world downstream applications. ### Detailed analysis of the NExT-QA results We totally agree with the reviewer's suggestion, as it can bolster the claims made in the paper. We have already updated Table 3 to include a comprehensive breakdown of the results (covering three question types: Causal (C), Temporal (T), and Descriptive (D)), and the results on the ATP-hard split. 
Since the policy does not allow submitting a revision, we include the **updated Table 3** in markdown below:

| Method | C (Orig.) | T (Orig.) | D (Orig.) | all (Orig.) | C (ATP-hard) | T (ATP-hard) | all (ATP-hard) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| InternVideo Backbone | 43.3 | 38.6 | 52.5 | 43.2 | 27.0 | 27.3 | 27.1 |
| KP-Transformer FT [VTC] | 46.1 | 45.0 | 61.3 | 48.1 | 32.5 | 33.6 | 33.0 |
| KP-Perceiver FT [VTC] | 46.0 | 46.0 | 58.9 | 48.0 | 30.1 | 31.6 | 30.7 |
| Side-Tuning [VTC+DVDM] | 54.9 | 52.0 | **69.8** | 56.3 | 37.4 | 36.0 | 36.8 |
| **Paxion** [VTC+DVDM] | **56.0** | **53.0** | 68.5 | **57.0** | **38.8** | **38.1** | **38.5** |

*Table 3: Causal-Temporal VQA (NExT-QA) results (in accuracy %) on the validation set. We consider both the original and the ATP-hard [1] split. We report accuracy for 'all' questions or for specific question types, including causal ('C'), temporal ('T'), and descriptive ('D') questions.*

As the reviewer expected, the decomposed results show that Paxion helps more on the Causal ('C') and Temporal ('T') types of questions, and achieves a more significant improvement on the harder subset (ATP-hard), where temporal and action knowledge is emphasized.

### Detailed comparison with the prior work “Test of Time” [5]

We identify the following key differences/contributions of our work compared with [5]. We will enrich lines 281-283 to make these comparisons more explicit.
1. While [5] primarily focuses on understanding the temporal ordering between events, our work focuses on general action knowledge, encompassing both causal and temporal understanding of actions.
2. We propose very different probing tasks. [5] did not investigate verb or object replacement, and the time-order reversal in [5] reverses the appearing order of two video segments, while the Video Reversal task in our ActionBench reverses the frames of a single video.
3.
Our proposed Paxion framework enables fast action knowledge patching on frozen VL backbones, while [5] requires post-pretraining the backbone model. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal response! I plan to update my final rating/review during the reviewer-AC discussion period, but in the meantime, I want to confirm that this is overall helpful towards some of the key concerns raised in the initial review, and that I don't think I have any further follow-up questions for the authors at this time. In particular, I'm quite glad that the updated NExT-QA subset analysis better contextualizes/confirms that the gains are coming in the most relevant settings (e.g., larger deltas on C/T/hard), and the additional discussion with [5] is also helpful to see. I'm still reflecting over more on some of the other rebuttal arguments on continued limitations/impacts -- at the same time, perhaps it's ok to leave some of this for future work as long as the claims in the paper are well-scoped. --- Reply to Comment 1.1.1: Comment: Thank you so much for your tremendous effort in the reviewing process and your engagement in the discussion. We are glad to see that our response has addressed most of your questions. We really appreciate your insightful suggestions which have made the result analysis much more solid and interesting.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Adaptive SGD with Polyak stepsize and Line-search: Robust Convergence and Variance Reduction
Accept (poster)
Summary: This paper presents two step-sizes, AdaSPS and AdaSLS, and theoretical analyses of PSGD with AdaSPS/AdaSLS. It also presents numerical results to support the analyses. The contribution of the paper is to provide the AdaSPS and AdaSLS step-sizes. Strengths: The strength of the paper is to provide two step-sizes, AdaSPS and AdaSLS, and to apply them to PSGD. In practice, the modifications, AdaSVRPS and AdaSVRLS, accelerate the existing methods. Weaknesses: - This paper considers only convex optimization, although optimization problems in deep learning are nonconvex. - There appear to be some doubts about the lemmas and theorems. I would like to see these doubts resolved (please see Questions). - The numerical results are insufficient, since they do not consider optimization problems in deep neural networks. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - AdaSLS and Lemma 17 (AdaSPS_main+appendix.pdf): Using the Armijo condition (4) in AdaSLS is interesting. Here, I check Lemma 17 (AdaSPS_main+appendix.pdf). However, I doubt that there exists a lower bound $(1-\rho)/L$ on the step-size $\gamma_t$ satisfying (4). In fact, we can provide counter-examples in which the step-size $\gamma_t$ satisfying (4) does not have any lower bound. Consider $f(x) = x^2$ and apply it to the Armijo condition (4). Then, we can set a sufficiently small $\gamma_t$. - Armijo condition (4): Related to the above comment, Figure 5 in [32] indicates that $\gamma_t$ satisfying (4) may converge to 0. Hence, we have counter-examples to Lemma 17 in which there does not exist a lower bound on the step-size $\gamma_t$ satisfying (4). - AdaSPS: AdaSPS uses $\ell_{i_t}^\star$. It would be unrealistic in deep learning. Can the authors provide some practical examples of $\ell_{i_t}^\star$ and $\eta_t$ defined by AdaSPS in deep learning? - Assumptions: Can the authors provide some examples satisfying the interpolation condition? 
- Convex optimization: Since optimization problems in deep learning are nonconvex, considering only convex optimization is of limited interest to the machine learning community. Can the authors rebut this? - Can the authors provide numerical results for training DNNs on the benchmark datasets (CIFAR-100 and ImageNet)? Using only LIBSVM datasets is insufficient to show the usefulness of the proposed algorithms. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: There is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the remarks and criticisms. We are a bit puzzled by your score. We provide clear answers to your questions below. Although we do not claim any contribution to the deep learning scenarios, we do provide DL experiments with our practical version of AdaSPS. Please see Appendix G for more details. In Figure G.1, we give results on multi-class classification tasks with CIFAR10 and CIFAR100 using ResNet34 with softmax loss. We compared our stepsize against popular algorithms including Adam, SGD-M, AdaGrad, and SPS, **among which our proposed AdaSPS gives the best performance**. We next provide answers to your questions. > Q1. line-search: lower bound We kindly disagree. The lower bound should be $\min(\frac{1-\rho}{L},\gamma_{\max})$. We guess you missed the $\min$ operator. (See also Lemma 1 in the paper you mentioned.) We also provide a complete proof in Lemma 17 in the appendix. We next answer your second question on why the stepsize in Figure 5 may diminish. 1. The implementation of SLS shown in Figure 5 uses dynamic initialization of $\gamma_{\max}$ to reduce the algorithm's running time, that is, setting $\gamma_{\max,t}=\gamma_{t-1}\theta^{1/n}$, where a common choice for $\theta$ is 2 and $\gamma_{t-1}$ is the previous stepsize, which reflects the curvature. Therefore $\gamma_{\max}$ is not fixed anymore. 2. Note that in ResNet 32, more than 80 percent of the parameters (denoted by $x$) are scale-invariant, that is, $f(x)=f(cx)$ for any $c>0$. This implies that $||\nabla^2 f(x)||=\frac{1}{||x||^2}||\nabla^2 f(\frac{x}{||x||})||$, which is unbounded from above, and thus $f(x)$ is **not** smooth. See more discussions in [1]. Combining 1) and 2), we can clearly see that the iterates generated by SLS may converge to some minimizer with locally increasing curvature, and therefore the stepsize (which is the inverse of the curvature) diminishes. To sum up, for non-smooth functions, the lower bound might not exist. 
However, in our work, we consider smooth functions. > Q2. $\ell_{i_t}^\star$ in deep learning We kindly disagree with the comment *AdaSPS uses $\ell_{i_t}^\star$. It would be unrealistic in deep learning.* First of all, $\ell_{i_t}^\star$ is much more readily available than $f_{i_t}^\star$, and this is already an improvement over SPS. Second, in most deep learning tasks, the loss functions are lower bounded by zero, i.e. $\ell_{i_t}^\star=0$. The examples include but are not limited to: 1. Regression: Mean Squared Error, Mean Absolute Error, Huber loss and log-cosh loss; 2. Classification: Cross-entropy loss, Hinge loss, etc. These losses are commonly used in machine learning and deep learning tasks. > Q3. Examples satisfying the interpolation condition If the machine learning model is highly expressive and can fit the training dataset completely, then the interpolation condition normally holds. For convex problems, a classical example would be binary classification using RBF kernels without regularization and with logistic loss. For non-convex problems, training an overparameterized neural network is a classical example. > Q4. Since deep learning is non-convex, why do we still study convex problems? We thank the reviewer for raising this question. First, the development of an 'appropriate' optimization theory for deep learning models remains an elusive challenge. These complicated models are often non-convex, non-smooth, and non-differentiable at their minimizers (e.g. ReLU activation and normalization). Traditional analysis investigates convergence towards stationary points with zero gradient norms. However, for deep learning, this conventional framework might not be useful. Therefore, the exploration of optimization theory for deep learning models remains an open question. Second, solving open questions in stochastic convex optimization boosts theoretical understanding and scientific development. Machine learning is not only about deep learning. 
Many machine learning problems are indeed convex, such as SVM, logistic regression, and linear regression. Even for these simple models, there still exist tricky problems that we cannot solve yet. Providing solutions or insights into these problems will help us understand more complicated models, and this is essentially how science should proceed. > Q5. Numerical results for training DNNs As mentioned earlier, we provide solid evidence in Appendix G that our proposed stepsize works well on deep learning tasks. Therefore, we think it is a promising stepsize that deserves further study in the future. > Remark It is crucial to fundamentally understand the behavior of adaptive algorithms on strongly convex and convex functions before we advance to more complicated cases. We would like to highlight the contributions of our work. First, we propose two adaptive stepsizes which have both strong theoretical guarantees and practical performance, under the weakest assumptions, addressing some of the drawbacks of the previous methods. Essentially, users can reliably apply our methods to their problems without needing to know problem-dependent parameters or the underlying interpolation conditions, which are often difficult to assess. This serves as a good step towards a fully automatic adaptive stepsize. Besides, the stepsizes also show strong performance in deep learning experiments. Secondly, Polyak and line-search type methods all fail to be combined with the classical VR framework. However, we break this limitation and manage to accelerate these two complicated stepsizes. We believe our newly proposed framework may encourage more personalized VR techniques in the future. If you agree that we managed to address all issues, please consider raising your score. If you believe this is not the case, please let us know so that we have a chance to respond. We really appreciate that. 
[1] Robust Training of Neural Networks Using Scale Invariant Architectures, ICML 2022 --- Rebuttal Comment 1.1: Title: Q1. line-search: lower bound Comment: Thank you for your replies. I do not understand your reply regarding the lower bound of the step-size. Let us consider $f(x) = x^2$, $\rho = 0.1$, and $\gamma_{\max} = 1$. Then, $L=2$ and $\min( \frac{1-\rho}{L}, \gamma_{\max} ) = \min( 0.45 ,1 ) = 0.45$. The Armijo condition $f(x - \gamma f'(x)) \leq f(x) - \rho \gamma |f'(x)|^2$ implies that $\gamma \leq 1 - \rho = 0.9$, where $x$ is not the global minimizer of $f$. Hence, a $\gamma$ satisfying the Armijo condition is, for example, $\gamma = 0.1$. Moreover, $0.1 < \min( \frac{1-\rho}{L}, \gamma_{\max} ) = 0.45$. Even if $\gamma_{\max} = 10^{-10}$, $\gamma = 10^{-11} \leq 1 - \rho = 0.9$ satisfies the Armijo condition. Accordingly, a step-size $\gamma$ satisfying the Armijo condition is not always lower bounded by $\min( \frac{1-\rho}{L}, \gamma_{\max} )$ (L528). --- Reply to Comment 1.1.1: Title: Q1. line-search: lower bound Comment: We thank the reviewer for the reply. We would like to refer to Algorithm 4 for a more detailed procedure of the Armijo line-search. If we run Armijo line-search, then we need to provide $\gamma_{\max}$. We first initialize $\gamma$ as $\gamma_{\max}$ (line 1), then we check if $\gamma$ satisfies the line-search condition (line 2). If not, then we decrease $\gamma$ by a factor of $\beta$ and repeat. Note we assume $\beta\in[0.5,1)$. In your example, if you initialize $\gamma_{\max}=1$, then the returned $\gamma$ is at least 0.5, which is larger than $\min(\frac{1-\rho}{L},\gamma_{\max})=0.45$. We would like to highlight that the $\gamma_{t}$ in equation (4) is returned by the Armijo line-search procedure rather than being an arbitrary number that satisfies the Armijo condition. We hope this resolves your concern. Thank you --- Rebuttal Comment 1.2: Title: Q3. Examples satisfying the interpolation condition Comment: Thank you again for your replies. 
I might misunderstand the definition of the interpolation condition. Could you provide the definition of the interpolation condition? (Probably, there is no definition of the interpolation condition in the paper.) --- Reply to Comment 1.2.1: Title: Q3. Examples satisfying the interpolation condition Comment: **The interpolation condition is clearly presented on line 122.** We say a problem is interpolated if $\sigma_{f,1}^2=f^\star-\mathbb{E}[f_{i}^\star]=f^\star-\frac{1}{n}\sum_{i=1}^n f_{i}^\star=0$. To make it extra clear, from the definition of interpolation: $\frac{1}{n}\sum_{i=1}^n f_{i}^\star=f(x^\star)=\frac{1}{n}\sum_{i=1}^n f_i(x^\star)$, where $x^\star$ is a minimizer of $f$, we can deduce $f_i(x^\star)=f_i^\star$ for any $i\in[n]$ since $f_i(x^\star)\ge f_i^\star$. In other words, interpolation means the global minimizer of $f$ is also a minimizer of each individual function $f_i$. Thank you --- Rebuttal Comment 1.3: Title: Q5. Numerical results for training DNNs Comment: Thank you again for your comments. I checked Appendix G. According to https://github.com/weiaicunzai/pytorch-cifar100 , the test accuracy of training ResNet-34 on CIFAR-100 using SGD-Momentum is almost 75 \%. The numerical results in the paper are insufficient (I think that the parameter setting would be insufficient in the paper). Moreover, the authors' proposed methods would not be competitive since the scores of the methods are less than 75 \%. --- Reply to Comment 1.3.1: Title: Q5. Numerical results for training DNNs Comment: We kindly disagree. We do not use any weight decay (regularization) in our experiments. We report the performance of each method under the same setting with zero weight decay to show their effectiveness at minimizing the original loss function.
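The backtracking procedure debated in the Q1 thread above can be made concrete with a short sketch. This is an illustration of standard Armijo backtracking on the reviewer's $f(x)=x^2$ example with the halving schedule $\beta=0.5$ mentioned in the reply; the helper name is hypothetical and this is not the paper's Algorithm 4 verbatim.

```python
def backtracking_armijo(f, grad, x, gamma_max, rho=0.1, beta=0.5):
    # Start from gamma_max and shrink by beta until the Armijo condition
    # f(x - gamma f'(x)) <= f(x) - rho * gamma * |f'(x)|^2 holds.
    g = grad(x)
    gamma = gamma_max
    while f(x - gamma * g) > f(x) - rho * gamma * g * g:
        gamma *= beta
    return gamma

# Reviewer's example: f(x) = x^2, so L = 2 and min((1 - rho)/L, gamma_max) = 0.45.
gamma = backtracking_armijo(lambda x: x * x, lambda x: 2 * x, x=1.0, gamma_max=1.0)
# gamma = 1.0 violates the condition (it requires gamma <= 0.9); one halving
# gives gamma = 0.5, which passes and sits above the lower bound 0.45.
```

This makes the point of the author reply visible: backtracking returns the first feasible step below $\gamma_{\max}$, which can undershoot the threshold $(1-\rho)/L$ by at most a factor $\beta$, rather than an arbitrarily small feasible step.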
Summary: The paper aims to propose robust methods that achieve optimal rates in both convex and strongly-convex settings, with or without interpolation. Specifically, they propose AdaSPS, a modification of $SPS_{max}$ with an AdaGrad-like denominator (replacing the gradient norms with function values) in the stepsize, and AdaSLS, a combination of AdaGrad-Norm and line-search methods. - For convex functions, AdaSPS and AdaSLS achieve a $O(1/\epsilon^2)$ convergence rate assuming individual smoothness and a bounded domain. When interpolation is assumed, both methods get rid of the bounded-domain assumption (but additionally require individual convexity, and AdaSPS requires the exact minimal function values) and achieve a $O(1/\epsilon)$ convergence rate. - For strongly-convex functions, individual strong convexity and smoothness are required. AdaSPS and AdaSLS achieve a $O(1/\epsilon^2)$ rate without interpolation. With interpolation, they achieve linear convergence rates. - Furthermore, the authors combine the proposed methods with variance reduction techniques, improving the rate for strongly-convex and convex settings to $O(1/\epsilon)$ without interpolation. The proposed methods are also evaluated by numerical experiments. Strengths: 1. The paper has clear logic and is well-written, making it easy to follow. 2. In all considered settings (convex/strongly-convex and interpolation/non-interpolation), the proposed AdaSPS and AdaSLS match the rate of well-tuned SGD in all settings except for the case when we have strong convexity and non-interpolation. Importantly, AdaSLS achieves this without requiring knowledge of any problem-dependent parameters. 3. The intuition behind the proposed variance reduction framework is interesting, and could potentially motivate new algorithms. Weaknesses: 1. The related work on interpolation is somewhat insufficient. One very relevant work [1] is missing. 
Importantly, [1] also proposed combining AdaGrad with line search, similar to AdaSLS, and provided theoretical guarantees in the convex setting. The difference is that AdaSLS uses a minimum operator over all past stepsizes. Usually, non-diminishing stepsizes work better in practice, so it would also make sense to compare them in the experiments. 2. I cannot find any theorems or proofs that show the results of AdaSVRPS/AdaSVRLS in strongly-convex and convex settings with interpolation, as listed in Table 1. This is the reason why I give a 1 in the soundness assessment. 3. Without variance reduction, the paper claims that their algorithms match the convergence rate of SGD in many settings; however, the assumptions are different. Individual smoothness and individual (strong) convexity are additionally assumed in many theorems in this work, making it hard to compare them to classical results of SGD (also AdaGrad-Norm). I wonder if there are lower bounds for the individual (strong-)convexity settings. With variance reduction, Theorem 8 also requires individual convexity, while the compared AdaSVRG does not. Minor issues: 4. Table 1 could contain results of AdaSVRG and SARAH for a more comprehensive comparison of adaptive variance reduction methods. For the interpolation strongly-convex setting where we have linear convergence, the dependence on $\kappa$, the condition number, is also important. 5. In Theorem 5, $T_p$ and $T_l$ are not defined. According to the results in the appendix, they depend on $\epsilon$ (of order $1/\log(1+\epsilon)$, or nearly $1/\epsilon$, if $c_p$ is chosen poorly). 6. Usually the scalar version of AdaGrad is referred to as AdaGrad-Norm. Calling it AdaNorm is not common, especially when there are other algorithms called AdaNorm. **References:** [1] Vaswani, Sharan, et al. "Adaptive gradient methods converge faster with over-parameterization (but you should do a line-search)." arXiv preprint arXiv:2006.06835 (2020). 
Technical Quality: 1 poor Clarity: 4 excellent Questions for Authors: I would appreciate it if the authors could address the concerns listed as weaknesses 1-3. Additionally, I wonder if we could change the added function value gaps (under the square root) in the denominator of the AdaSPS stepsize to the squared norm of gradients (like AdaGrad) and obtain a similar result. Is the use of the function value gap necessary? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 4 excellent Contribution: 2 fair Limitations: I think the main limitation is the sub-optimality in the non-interpolation strongly-convex setting, which is also a known hard problem for adaptive methods. This is mentioned in the Conclusion and future work section of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
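For reference, the "AdaGrad-Norm" naming discussed in minor issue 6 refers to the scalar variant of AdaGrad, which accumulates squared gradient norms into a single global stepsize. A minimal sketch of the standard form (illustrative constants and function; this is not the paper's exact variant):

```python
import math

def adagrad_norm(grad, x0, eta=1.0, steps=100):
    # Scalar AdaGrad: one stepsize eta / sqrt(sum of squared gradient norms),
    # shared by all coordinates (here a 1-D problem for simplicity).
    x, acc = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        acc += g * g
        x -= eta / math.sqrt(acc) * g
    return x

# On f(x) = x^2 (gradient 2x) the iterates approach the minimizer 0
# without any knowledge of the smoothness constant L.
x_final = adagrad_norm(lambda x: 2.0 * x, x0=5.0)
```

The contrast with coordinate-wise AdaGrad is that only the norm of the gradient enters the denominator, which is why the scalar version is the natural point of comparison for step-size schemes like AdaSPS/AdaSLS.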
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive remarks. We provide answers below. (Due to space limits, we omit the big O notation and write V-SGD for Vanilla SGD, AGN for AdaGrad-Norm, IC for individual convexity, ISC for individual strong convexity, and PCV for potential camera-ready version.) > W1. Paper on AdaGrad+line-search We apologize for missing this paper and will include it in our PCV. Since we study adaptive stepsizes for SGD, let us discuss the scalar version of this algorithm. For AGN with constant stepsize, the best $\eta$ should be of order $\Theta(D)$, which gives the optimal rate of $LD^2/T+\sqrt{L}D\sigma/\sqrt{T}$. Note this method multiplies it by a scalar of order $1/L$, which results in a suboptimal rate $D^4L^3/T+D^2L^{3/2}\sigma/\sqrt{T}$ (see Theorem 2 in this paper; $\eta_{\max}$ is usually larger than $\frac{1}{L}$). In contrast, our carefully designed AdaSLS multiplies the inverse of the curvature $\gamma_t$ by the refined scaling term $1/c_l\sqrt{\sum_{s=0}^t\gamma_s||\nabla f_{i_s}(x_s)||^2}$ instead of $1/\sqrt{\sum_{s=0}^t||\nabla f_{i_s}(x_s)||^2}$ (this paper), which allows us to obtain the optimal convergence rate. In the case of interpolation + IC, they can remove the constraint $\eta_{t+1}\le\eta_t$ to improve practical performance. For AdaSLS, we can also remove the $\min$-operator, and the fast rate is still preserved (our proof does not require the $\min$-operator in this case). However, since we focus on the setting where the underlying interpolation condition is unknown, it is only fair to compare these methods with conservative constraints. This paper requires $\eta_t\le\eta_0$, and thus the algorithm is no better than a well-tuned AGN in practice. Since we have compared our stepsizes with the best-tuned AGN, we argue that it is not necessary to add this method to the experiments. > W2. VR, typos We thank the reviewer for finding these typos. 
For the VR methods, in all four cases listed in Table 1, we will fill them in with $\tilde{\mathcal{O}}(n+\frac{1}{\epsilon})$. Theorem 8 covers the convex and interpolation setting as well as the two st-convex settings. For st-convex problems, it is a known hard problem to prove that adaptive VR methods converge linearly. Since we do not claim any contribution of our new VR methods to the st-convex setting, and the word robustness refers to the adaptive stepsizes, we would really appreciate it if you would consider raising your soundness score. > W3. Assumptions We thank you for the careful discussion. We agree that V-SGD only requires smoothness of $f$. But the individual smoothness assumption is also standard and can be satisfied in practice. Under this assumption, for all the cases, **the assumptions required for AdaSP/LS are at least the same as, or even weaker than, any of the previous adaptive methods**. 1. $f$ is convex + interp.: V-SGD with constant stepsize/AdaSP/LS/AGN converges as $1/T$. SPS/SLS/DecSPS requires IC, and DecSPS converges slowly. 2. $f$ is convex + non-interp.: V-SGD with decreasing stepsize/AdaSP/LS/AGN converges as $\sigma/\sqrt{T}$. SPS/SLS cannot converge. DecSPS needs IC. 3. $f$ is convex + IC + interp.: AdaSP/LS can remove the bounded iterates assumption while no such result exists for AGN. 4. $f$ is st-convex + ISC + non-interp.: AdaSP/LS/DecSPS can remove the bounded iterates assumption while no such result exists for AGN. 5. $f$ is st-convex + interp.: (In the attached pdf, we replaced ISC with the classical IC, and the original linear rate is preserved.) **In this case, the current adaptive methods require one more assumption than V-SGD**. V-SGD converges as $\exp(-\mu T/L)$. With additional IC, SPS/SLS converges as $\exp(-\mu T/L)$. AdaSP/LS also converges linearly, with a constant that depends on the first iterate and is usually worse than SPS/SLS (see attached). AdaGrad-Norm cannot converge linearly without knowing $L$. 
VR method + $f$ is convex: AdaSVRPS/AdaSVRLS require IC while SARAH/AdaSVRG do not. We will clearly state this requirement in our PCV. However, since SPS/SLS needs exactly the individual convexity assumption in the interpolation setting, we think this assumption cannot be removed, and we believe this is a weakness of these methods compared with AdaGrad. But in practice, the IC assumption is often satisfied. > Q1. Can we replace the added function value gaps with the squared norm of gradients? There are a few reasons why we cannot. 1) Scaling issue. Suppose the exact Polyak stepsize is used. This quantity is of scale $1/L$. If we multiply it by AGN, then the scaling gives a suboptimal rate. 2) For convex problems, the error term $f_{i_t}(x_t)-f_{i_t}^\star$ cannot be upper bounded by $||\nabla f_{i_t}(x_t)||^2$, which might lead to divergence. 3) If we relax it to $\frac{f_{i_t}(x_t)-\ell_{i_t}^\star}{||\nabla f_{i_t}(x_t)||^2}$, then the error caused by $\ell_{i_t}^\star$ cannot be compensated by the growing squared norm of the gradients at a correct rate. But it can be controlled by the accumulated $f_{i_t}-\ell_{i_t}^\star$ used in AdaSPS. > M1. A refined version of Table 1 We thank you for the suggestions. We will add the results and the relaxed assumption for AdaSVRG. For SARAH, since it uses a different gradient estimator and its stepsize is constant, we think it is a bit unfair to add it to Table 1 for adaptive stepsizes. For the ISC problem, we agree that the constant is important and we will add it for clarity. > M2. $T_p$ and $T_l$ We agree that if $c_p$/$c_l$ is chosen poorly, the convergence can be sublinear. Therefore, we recommend the theoretically suggested $c_p$/$c_l$ indicated in the Theorem to avoid the potential slowdown. These values only depend on the first iterate. In other words, they are parameter-free. > M3. A correct name: AdaGrad-Norm We will do that, thanks. We thank you again for the great reviews. 
If you agree we addressed the main concerns, please consider raising your score. If you believe not, please let us know. We really appreciate that. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanation. If AdaSVRPS/AdaSVRLS can only achieve $\widetilde{\mathcal{O}}(n + \frac{1}{\epsilon})$ for all settings, then they seem less compelling. The authors claimed that "This is a significant contribution, as trivial combinations of existing variance-reduction techniques with Polyak stepsizes or line-search does not work." Why is it interesting to bring line search or Polyak stepsizes to variance reduction in the first place if the rates are no better than existing methods? Also, it seems a bit unconventional to consider variance reduction and interpolation at the same time, given that there is no benefit in improving the rate when compared to considering them separately. In addition, I noticed that in Proposition 1, to achieve the claimed goal of no requirement for unknown parameters, both $c_l$ and $c_p$ should be set according to the initial stochastic gradient or function value. However, in the proof of Theorem 1, when taking expectations, they are treated as constants. I am not sure whether the proof remains valid when they are random variables. This could also be an issue for the theorem presented in the rebuttal PDF. --- Reply to Comment 1.1.1: Title: Answers to the first paragraph Comment: Thanks for the great remarks. > If AdaSVRPS/AdaSVRLS can only achieve $\tilde{O}(n+\frac{1}{\epsilon})$ for all settings, then they seem less compelling. 1. We thank you for the comment. The word "only" seems a bit strong. There are **no** counter-examples that show AdaSVRPS/AdaSVRLS **cannot** achieve linear convergence in the strongly-convex settings. In numerical experiments, they show such a linear rate (for instance, see the second plot in Figure 1). However, the proof itself is a known hard problem due to the many bottlenecks in the current proof framework. 
That being said, the VR methods themselves, including AdaSVRG/AdaSVRLS/AdaSVRPS, might still be able to accelerate in this case. We leave this as future work. 2. The only current adaptive VR method is AdaSVRG. First, all three methods show competitive performance in practice. Second, note that AdaSVRPS/AdaSVRLS have the freedom to decide how often to update the full gradient depending on the available computational power, so that convergence might be faster in practice. This can be done by choosing different $p_t$ ($p_t$=1 reduces to GD). AdaSVRG needs to carefully determine the number of stages and the inner-outer-loop size to guarantee convergence. Therefore, AdaSVRPS is no worse than AdaSVRG. > Why are these two methods still interesting if the rates are no better than existing methods? 1. Bringing line-search to VR has been an interesting open question over the last decade. Schmidt et al. [1] and Mairal [2] provide promising empirical results by setting the stepsize in VR using line-search. However, theoretical guarantees have been elusive. Dubois-Taine et al. [3] first provided a counter-example showing that an intuitive line-search method with VR fails to converge, which cast doubt on this approach. However, we show that, in fact, doing line-search on the biased individual function $f_{i_t}$ provides misleading curvature information, which makes the classical VR method fail to work. We address this issue by adding a correction term that contains global information to $f_{i_t}$ and then doing line-search on the variance-reduced $F_{i_t}$. This approach breaks the previous limitation and bias, which might encourage faster and better VR methods with line-search and Polyak stepsizes. 2. The proposed VR framework is general. Apart from the Polyak stepsize and line-search, one can also apply the AdaGrad stepsize to the functions $F_{i_t}$, and the resulting algorithm enjoys the same convergence guarantee. 
The same approach can be applied to the stepsizes proposed by Malitsky et al. [4], Ivgi et al. [5], etc. Therefore, it would be interesting and promising to use this framework to develop better and faster VR algorithms (for instance, applying second-order methods or momentum on $F_{i_t}$). > Consider variance reduction and interpolation at the same time We apologize for the confusion in Table 1. The separation is used for the adaptive stepsizes for SGD, and we do not aim to consider VR and interpolation at the same time. We will remove the VR methods from Table 1 to make it extra clear. [1] Minimizing finite sums with the stochastic average gradient. Mathematical Programming [2] Optimization with first-order surrogate functions. ICML 2013 [3] SVRG meets AdaGrad: Painless Variance Reduction, Machine Learning 2022 [4] Adaptive Gradient Descent without Descent, ICML 2020 [5] DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule, ICML 2023 --- Reply to Comment 1.1.2: Title: Answers to the second paragraph Comment: > $c_p$ and $c_l$ depend on the first stochastic sample, which might make the proof invalid. We appreciate the great observations. **We apologize that we did make a mistake in the last step of the proof. But this can be easily fixed.** We take AdaSPS as an example. Under the individual convexity and interpolation assumptions, we get equation (7) from the rebuttal pdf: $||x_{t+1}-x^\star||^2\le||x_t-x^\star||^2-\frac{1}{(2c_pL||x_0-x^\star||)^2}\nabla f_{i_t}(x_t)^T(x_t-x^\star)$. Denote $\frac{1}{(2c_pL||x_0-x^\star||)^2}$ by $A$ and plug in $c_p=\frac{c_p^{scale}}{\sqrt{f_{i_0}(x_0)-f^\star}}$, where $c_p^{scale}\ge 1$ is a fixed constant. Since the right-hand side depends on the inner product of two random variables, we need to take the expectations carefully. > **$f$ is convex.** For any $t\ge 1$, we can take the expectation conditional on $i_0$ on both sides and get $E[||x_{t+1}-x^\star||^2|i_0]\le E[||x_t-x^\star||^2|i_0]-AE[\nabla f(x_t)^T(x_t-x^\star)|i_0]$. 
Using convexity and summing up from $1$ to $T$, we get $\sum_{t=1}^TE[f(x_t)-f^\star|i_0]\le\frac{1}{A}||x_1-x^\star||^2\le\frac{1}{A}||x_0-x^\star||^2$. Taking the expectation w.r.t. $i_0$ on both sides and dividing by $T$, we get $\frac{1}{T}\sum_{t=1}^T E[f(x_t)-f^\star]\le 4L(c_p^{scale})^2E_{i_0}[\frac{||x_0-x^\star||^2}{(f_{i_0}(x_0)-f^\star)}]\frac{L||x_0-x^\star||^2}{T}$. > **$f$ is st-convex.** For any $t\ge 1$, we have $E[||x_{t+1}-x^\star||^2|i_0]\le(1-\mu A)E[||x_{t}-x^\star||^2|i_0]$. Unrolling, for any $T\ge 1$, we get $E[||x_{T+1}-x^\star||^2|i_0]\le(1-\mu A)^T||x_1-x^\star||^2\le(1-\mu A)^T||x_0-x^\star||^2$. Taking the expectation w.r.t. $i_0$, we get $E[||x_{T+1}-x^\star||^2]\le E_{i_0}[(1-\mu A)^T]||x_0-x^\star||^2$. (Note another way to handle this issue is to first use the bound $c_p\le\frac{c_p^{scale}}{\sqrt{\min_{i_0} (f_{i_0}(x_0)-f^\star)}}$ before taking the expectation. But this is too pessimistic.) We will make this clear in the manuscript. We thank you again for the great remarks. We hope our answer addresses your concerns.
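The correction idea described in this thread (line-search on a variance-reduced $F_{i_t}$ rather than on the biased $f_{i_t}$) can be illustrated with the standard SVRG-style construction. This is a generic sketch on a toy one-dimensional least-squares sum with made-up data, not the paper's exact definition of $F_{i_t}$:

```python
# Toy finite sum: f = (1/n) * sum_i f_i with f_i(x) = (a_i * x - b_i)^2 / 2.
a = [1.0, 2.0, -1.0, 0.5, 3.0]
b = [1.0, -2.0, 0.5, 0.0, 1.5]
n = len(a)

def grad_i(x, i):                       # f_i'(x) = a_i * (a_i * x - b_i)
    return a[i] * (a[i] * x - b[i])

def grad_full(x):
    return sum(grad_i(x, i) for i in range(n)) / n

# Corrected function F_i(x) = f_i(x) - (f_i'(w) - f'(w)) * x at a snapshot w, so
# F_i'(x) = grad_i(x) - grad_i(w) + grad_full(w): the classic SVRG-style estimator.
w, x = 0.7, -1.3
grad_F = lambda x, i: grad_i(x, i) - grad_i(w, i) + grad_full(w)

# Averaging over i recovers the exact full gradient at any point ...
mean_est = sum(grad_F(x, i) for i in range(n)) / n   # equals grad_full(x)
# ... and at the snapshot w, every F_i'(w) equals f'(w): zero variance there.
```

Because each $F_i$ carries this global correction term, the curvature probed by a line-search on $F_i$ is no longer systematically biased towards the sampled component, which is exactly the failure mode of naive line-search + VR described in the reply.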
Summary: This paper introduces AdaGrad-Norm-type updates into stochastic line search (SLS) and the stochastic Polyak step size (SPS) (namely AdaSLS and AdaSPS), which guarantee convergence in the non-interpolating convex scenario with a $\mathcal{O}(\frac{1}{\epsilon^2})$ rate. Then it shows that AdaSLS and AdaSPS converge linearly when interpolation holds under strong convexity. Finally, it combines variance reduction with AdaSLS/AdaSPS to improve the convergence to $\mathcal{O}(n+\frac{1}{\epsilon})$ for convex losses. Strengths: 1. Overall, the paper is well-written. The comparisons with existing literature results are thorough. 2. The introduction of loopless variance reduction, together with AdaGrad and SPS/SLS, into a single framework is novel. Weaknesses: 1. There is no improvement in the convergence rates. DecSPS was introduced to make SPS converge in non-interpolating settings by forcing the step size to be monotonically decreasing. The proposed AdaSPS essentially replaces the sequence $c_k$ in DecSPS [1] with AdaGrad-Norm, and the step size is also monotonically decreasing, resulting in the same rate as DecSPS. In the interpolating settings, SLS [2] and SPS [3] already converge without the step size being monotonically decreasing. I don’t see the improvements of AdaSPS/AdaSLS over previous methods under interpolation or non-interpolation. 2. In the strongly-convex interpolating case, AdaSPS/AdaSLS converge linearly with some requirements given in Corollary 6. SPS and SLS have the same convergence rate without those additional requirements. In fact, the constants $c_p$ and $c_l$ associated with AdaSPS and AdaSLS respectively depend on the sample at the first iteration. I don’t see how this is always satisfied. Does it mean that $c_p$ and $c_l$ need to change every time when running the algorithm? In addition to this, AdaSPS/AdaSLS assume that each function is strongly convex; this is a very strong assumption that does not appear in the analysis of SPS and SLS. 
Does it mean only optimizing one function is sufficient, as they all share the same unique global optimum under interpolation? 3. The experiments are not promising. Figure 1 just shows that AdaSPS performs worse than SPS under interpolation, and performs worse than or similarly to DecSPS under non-interpolation. If there already exist some better alternatives in either setting, what is the benefit of using AdaSPS? Technical Quality: 3 good Clarity: 3 good Questions for Authors: I vote for rejection for the following reasons: 1. No improvements in rates over previous works in either the interpolation or non-interpolation scenario. 2. Weak empirical results. The gain in the proposed method is insignificant. 3. The analysis requires some strong assumptions and additional requirements to obtain the same rate as previous works. [1] Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution [2] Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates [3] Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
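For context on the stepsize at the center of this review: SPS$_{max}$ from [3] uses $\eta_t = \min\{(f_{i_t}(x_t)-f_{i_t}^\star)/(c\,\|\nabla f_{i_t}(x_t)\|^2),\ \eta_b\}$ for a scaling constant $c$ and cap $\eta_b$. A minimal one-dimensional sketch on an interpolated toy problem, with illustrative constants (assuming $c=1$; not the paper's experimental setup):

```python
def sps_max(fi, gi, f_star=0.0, c=1.0, eta_b=10.0):
    # SPS_max stepsize: min{ (f_i(x) - f_i*) / (c * |g_i|^2), eta_b }.
    return min((fi - f_star) / (c * gi * gi), eta_b)

# Interpolated toy problem: f_i(x) = (a_i * x)^2 / 2, all minimized at x* = 0,
# with very different scales a_i (an ill-conditioned but interpolated sum).
a = [0.5, 1.0, 2.0]
x = 5.0
for t in range(60):
    ai = a[t % 3]
    fi, gi = (ai * x) ** 2 / 2, ai * ai * x     # f_i(x) and f_i'(x)
    x -= sps_max(fi, gi) * gi
# Here the SPS stepsize is 1/(2 a_i^2), so every update halves x exactly,
# regardless of which a_i is sampled: linear convergence under interpolation.
```

This illustrates the behavior both sides agree on: under interpolation with known $f_{i}^\star$, SPS adapts to the local curvature of each component, which is the regime where the review argues plain SPS is already hard to beat.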
Rebuttal 1: Rebuttal: We thank the reviewer for the remarks and criticisms. Before responding to each point, we kindly want to highlight our first contribution to address your main concern. **We focus on the settings where the underlying interpolation condition is unknown to the users.** Having a **robust** (i.e., one that can adapt to the interpolation condition) and theoretically grounded algorithm is important in many cases, such as the federated learning setting mentioned in the introduction. **Furthermore, it is not easy to determine whether or not a model is effectively interpolating the given dataset.** Consider the rcv1 dataset, where the feature dimension is about twice the number of data points. A logistic regression model would be considered overparameterized here, but the features are actually sparse and the problem is not interpolated. Therefore, employing SPS/SLS cannot guarantee convergence. **Developing a robust and reliable algorithm will be of great convenience for users in practice.** As summarized in Table 1, our newly proposed AdaSPS/AdaSLS are the **first** adaptive stepsizes to have such strong **robust** theoretical guarantees in all cases without knowledge of the Lipschitz constant. We now provide answers to each point. >W1. There is no improvement in the convergence rates. We kindly disagree. Each of the previous adaptive methods is the best only on a certain class of problems, while our method achieves the best known rates in all these scenarios. From Table 1, AdaSPS/AdaSLS achieve both fast convergence rates in the interpolation settings like SPS/SLS and in non-interpolated settings like DecSPS. **These asymptotic rates are already optimal and cannot be further improved** (except for strong convexity without interpolation). The denominator defined in AdaSPS/AdaSLS is essentially the key to having such adaptivity to the underlying interpolation condition. 
Note that DecSPS artificially incorporates an $\mathcal{O}(1/\sqrt{t})$ decreasing rule, which results in slow convergence with interpolation, while the denominator designed for AdaSPS/AdaSLS can be upper bounded by a constant if interpolation holds and goes to infinity if not. > W2. Assumptions We thank the reviewer for the comments. In the attached pdf, we replaced the individual strong-convexity assumption in Theorem 5 with the classical individual convexity. The original linear rate complexity is preserved. We kindly disagree with the comment “The analysis requires some strong assumptions and additional requirements to obtain the same rate as previous works.” **In all cases, our assumptions are the same as or even weaker than those of any of the previous adaptive methods!** 1. Convex + interpolation: SPS/SLS/DecSPS assume individual convexity, while ours and AdaGrad-Norm only assume $f$ is convex (see Theorem 1 with noise = 0). 2. Convex + interpolation + individual convexity: we can further remove the bounded-iterates assumption, while AdaGrad-Norm cannot. 3. Convex + non-interpolation: DecSPS assumes individual convexity, while ours and AdaGrad-Norm only assume $f$ is convex. 4. Strongly convex + non-interpolation + individual strong convexity: ours and DecSPS remove the bounded-iterates assumption, while AdaGrad-Norm cannot. 5. Strongly convex + interpolation: SPS/SLS and ours all assume individual convexity, while AdaGrad-Norm cannot show linear convergence without knowledge of $L$, and DecSPS only achieves $\mathcal{O}(1/\epsilon^2)$. This again shows the **strong robustness of our stepsizes and the weak assumptions they require**. > W3. How to set $c_p$/$c_l$ As illustrated in the numerical evaluation section, we used the theoretically justified hyperparameter $c_p^{\text{scale}}$, which essentially only depends on the first iterate. For instance, let us fix $c_p^{\text{scale}}=1$.
Then $c_p=\frac{1}{2\sqrt{f_{i_0}(x_0)-\ell_{i_0}^\star}}$ and it will not change during the iterations. This quantity provides the right scaling correction to the stepsize and spares the user extra tuning. > W4. The experiments are not promising? We kindly disagree: our synthetic experiments are designed to illustrate the **robustness** of our proposed algorithms. Let us compare these algorithms closely. We agree that AdaSPS is not as competitive as SPS with interpolation, since AdaSPS is a non-increasing stepsize. However, AdaSPS shows the desired linear and sublinear convergence for strongly convex and convex problems, while DecSPS converges much more slowly. In the non-interpolation regime, SPS **cannot** converge and has a large neighborhood error. **AdaSPS outperforms DecSPS in all the experiments** (see Figures 1 and 2) **except** for the second plot in Figure 1, where we could choose a larger $c_p$ to improve its performance. (Note that we fix the same $c_p^{\text{scale}}=1$ across these experiments to show the robustness.) As such, AdaSPS can be seen as a direct replacement for DecSPS (both in theory and in practice). If we are certain that the underlying problem is interpolated and we know the exact optimal function value, then we definitely recommend using SPS. Otherwise, AdaSPS/AdaSLS is always reliable and offers a better choice. We would like to highlight another important aspect of introducing AdaSPS/AdaSLS. SPS/SLS/DecSPS all fail to be incorporated into the VR framework because they are not robust. Our work points to a very promising direction: even these difficult stepsizes can be combined with VR techniques as long as we can 'make them robust'. This may motivate more personalized VR techniques in the future. We believe that our newly proposed stepsizes and the novel VR framework are good contributions to the community for many potential extensions.
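To make the discussion of the adaptive denominator concrete, here is a minimal sketch of an AdaSPS-style stepsize in Python (our own illustrative code, following the form described in this thread — the classical SPS ratio damped by the square root of the accumulated suboptimality gaps, clipped to be non-increasing — not the authors' implementation):

```python
import numpy as np

def adasps_step(grad, f_val, f_lb, acc, c_p, gamma_prev):
    """One AdaSPS-style stepsize update (illustrative sketch).

    grad: stochastic gradient of the sampled component at x_t;
    f_val: its function value; f_lb: a lower bound on its optimum;
    acc: running sum of suboptimality gaps (the adaptive denominator);
    gamma_prev: previous stepsize (enforces a non-increasing sequence).
    """
    gap = max(f_val - f_lb, 0.0)
    acc = acc + gap  # bounded under interpolation, grows without bound otherwise
    denom = c_p * float(np.dot(grad, grad)) * np.sqrt(acc)
    gamma = gap / max(denom, 1e-12)
    return min(gamma, gamma_prev), acc
```

Under interpolation the gaps shrink to zero, so `acc` stays bounded and the stepsize stays large; without interpolation `acc` keeps growing, which automatically recovers an $\mathcal{O}(1/\sqrt{t})$-type decay.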
If you agree that we managed to address all issues, please consider raising your mark. If you believe this is not the case, please let us know so that we have a chance to respond. We really appreciate that. --- Rebuttal 2: Title: Reply Comment: Thanks for the rebuttal. I have raised my score to 4. But I am not fully convinced by the novelty and significance of this work.
Summary: This paper proposes two new variants of SPS and SLS, called AdaSPS and AdaSLS, which provide convergence in non-interpolation settings for convex and strongly convex functions when training over-parameterized models. AdaSLS requires no knowledge of problem-dependent parameters, and AdaSPS requires a lower bound of the optimal function value as input. In addition, the paper studies a new variance-reduction technique for AdaSPS and AdaSLS and proves the gradient complexity for convex functions, which improves upon the rates of AdaSPS and AdaSLS without variance reduction in the non-interpolation settings. This matches the fast rates of AdaSVRG and SARAH without the inner-outer-loop structure, which is easier to implement and analyze. The authors provide numerical experiments on synthetic data and binary classification on LIBSVM datasets. Strengths: The two algorithms AdaSPS and AdaSLS are interesting. The authors provide convergence analyses for them; in particular, in the non-interpolation setting the algorithms attain the classical convergence rate for convex functions. The variance-reduction technique has the same complexity in expectation for convex functions as the rate of AdaSVRG and SARAH. Weaknesses: The theoretical rate is compared with SARAH, but SARAH is not included in the experiments. The variance-reduction technique and the one-loop structure are not entirely new, since they already appeared in the PAGE algorithm. The authors may consider discussing that as well. A natural question is why practitioners should use the new variance-reduction algorithms when they are more complicated than the well-established methods in convex settings. Thus, more extensive experiments would help to demonstrate the effectiveness of the algorithms. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Since the algorithm requires more parameters than classical methods, could you explain how they are chosen in the experiments / suggested in practical settings? e.g.
$\mu_F$, $c_p$, $c_l$, $\gamma_t$, $p_t$? How may using the lower bound 0 affect the performance of the algorithm (theoretically and empirically), given that the theory uses the exact optimal values? --- I thank the authors for your rebuttal. My recommendation remains the same. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
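For context on the line-search parameters asked about here ($\gamma_{\max}$, the backtracking factor $\beta$, the sufficient-decrease constant $\rho$), a generic Armijo backtracking routine looks as follows. This is a standard sketch of the kind of line-search these methods build on, not the paper's AdaSLS itself (which additionally rescales the result adaptively):

```python
import numpy as np

def armijo_backtrack(f_i, grad_i, x, gamma_max=1e3, beta=0.8, rho=0.5):
    """Generic Armijo backtracking line-search (illustrative).

    Shrinks gamma by `beta` until the sufficient-decrease condition
    f_i(x - gamma*g) <= f_i(x) - rho*gamma*||g||^2 holds.
    """
    g = grad_i(x)
    fx = f_i(x)
    gnorm2 = float(np.dot(g, g))
    gamma = gamma_max
    while f_i(x - gamma * g) > fx - rho * gamma * gnorm2:
        gamma *= beta
    return gamma
```

For smooth functions this loop terminates with a stepsize on the order of the inverse local smoothness constant, without $L$ ever being supplied.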
Rebuttal 1: Rebuttal: We thank the reviewer for the remarks and suggestions on our paper. You can find our replies below: > Comparison with SARAH and PAGE We thank the reviewer for mentioning PAGE. We will include a discussion of PAGE in the potential camera-ready version. We would like to highlight that **there is no direct connection of our methods to SARAH and PAGE.** We classify SVRG, SARAH, and PAGE into a first group, as they propose different gradient estimators for variance reduction. To guarantee convergence, knowledge of $L$ is required to set their constant stepsizes. Conversely, AdaSVRG/AdaSVRPS/AdaSVRLS belong to a different group: they fix the gradient estimator first (for instance, they all use SVRG) and focus on setting the stepsize adaptively, without needing to know problem-dependent parameters to guarantee convergence. **Therefore, the works in these two groups are orthogonal.** > Technical novelty We do not claim that the one-loop structure and the probabilistic update of the full gradient are new, since they have already been discussed in [19] and PAGE, as you mentioned. However, our proposed VR framework based on a moving sequence of random functions is new, as it allows difficult adaptive stepsizes such as Polyak and line-search type algorithms to be accelerated. With the common VR technique, these stepsizes fail to converge. Another novelty is that with our decreasing-probability strategy, the inner-outer-loop structure used in AdaSVRG can be removed, and this strategy is the key to providing the optimal convergence rate in the convex setting. In other works on loopless VR techniques, the probability is set to a constant. > SARAH experiment We thank the reviewer for the comment on SARAH, and we have included it in the attached pdf under the same experimental setting as in Section 5. But as we mentioned earlier, AdaSVRPS/AdaSVRLS/AdaSVRG/SVRG all use the same gradient estimator but with different stepsizes.
Consequently, we think it is fair to compare them in the experiments. Comparison of different gradient estimators is somewhat orthogonal to the focus of this work. > How to set hyper-parameters for AdaSVRPS/AdaSVRLS in practice For both AdaSVRPS and AdaSVRLS, the key parameters are $\mu_F$ and $c_p^{\text{scale}}/c_l^{\text{scale}}$. Note that the inverse of the curvature of the random function $F_{i_t}$ is upper bounded by $\mathcal{O}(\frac{1}{\mu_F})$ ($F_{i_t}$ is at least $\mu_F$-strongly convex). Therefore, the smaller $\mu_F$, the larger the maximum value the stepsize can reach. In practice, we can set it to $10^{-4}$ for a potentially aggressive stepsize in the case where $d>n$. Otherwise, we can set it to $1$ for a conservative stepsize. $c_p^{\text{scale}}/c_l^{\text{scale}}$ controls the scale of the Polyak/line-search stepsize. In other words, the adaptive stepsize is upper bounded by the standard Polyak/line-search stepsize multiplied by the inverse of $c_p^{\text{scale}}/c_l^{\text{scale}}$. A reasonable choice would simply be $c_p^{\text{scale}}/c_l^{\text{scale}}\in[0.5,1,2]$. The smaller the number, the more aggressive the stepsize. For the parameters inside the line-search method, we always fix $\gamma_{\max}=10^3$ or $\frac{1}{\mu_F}$, $\beta=0.8$ and $\rho=1/2$. The last parameter, $p_t=\frac{1}{at+1}$, is very flexible. The smaller $a$, the more frequent the computation of the full gradient. A standard choice would be $a=0.1$; however, depending on the computational power, one can freely choose this number. In contrast, the inner-outer-loop structure of AdaSVRG does not allow an arbitrary full-gradient update frequency. Indeed, AdaSVRG fails to converge with an inner-loop size of one and $g_t = \nabla f(x_t)$ fixed. All the experimental details can be found in Appendix F. > Impact of the lower bound of zero for AdaSVRPS in practice Yes, for AdaSVRPS, we use the exact $F_{i_t}^\star$ in theory.
If we replace it with $\ell_{i_t}^\star$ (a lower bound of $F_{i_t}^\star$), then an additional slow-down term $\mathcal{O}(\frac{\sigma}{\sqrt{T}})$ appears in theory. The proof can follow the routine for AdaSPS. However, in practice, it suffices to use $\ell_{i_t}^\star+\min_x\{x^T(\nabla f(w_t)-\nabla f_{i_t}(w_t))+\frac{\mu_F}{2}||x-x_t||^2\}$, where $\ell_{i_t}^\star$ is a lower bound for $f_{i_t}^\star$, which is normally zero. Note that we use this lower bound for all our experiments, and AdaSVRPS always shows competitive performance compared with a well-tuned SVRG. Therefore, we believe using a lower bound has no impact in practice and can sometimes even make the algorithm behave more aggressively (since the stepsize is larger than needed). > Practical considerations The previously well-established VR methods, including SVRG/SARAH/PAGE, require knowledge of the Lipschitz constant to guarantee convergence. AdaSVRG needs to predefine the target accuracy $\epsilon$ to design the number of stages and the inner-outer-loop size; an arbitrary full-gradient update frequency is also not supported. While AdaSVRPS/AdaSVRLS require more hyper-parameters, one can simply set $\mu_F=1$ and $c_p^{\text{scale}}/c_l^{\text{scale}}=1$, and the algorithms reliably converge at the correct rate of $\mathcal{O}(1/T)$. The user also has the freedom to adjust these parameters to gain more aggressive practical performance. As for the two adaptive stepsizes, AdaSPS/AdaSLS bring much convenience since they do not need extra tuning of the stepsize (simply setting $c_p^{\text{scale}}/c_l^{\text{scale}}=1$ is enough for robust convergence). We believe that our newly proposed stepsizes and the novel VR framework are important contributions to the community, as they may motivate more personalized VR techniques in the future. If you agree that we managed to address all issues, please consider raising your mark.
If you believe this is not the case, please let us know so that we have a chance to respond. We really appreciate that.
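As an aside for readers, the loopless variance-reduction update with a decreasing full-gradient refresh probability $p_t=\frac{1}{at+1}$, as described in this rebuttal, can be sketched like this (our own toy code; the `grad_fi(x, None)` convention for requesting the full gradient is an assumption of this sketch, not an API from the paper):

```python
import numpy as np

def svrg_estimator(x, i, grad_fi, state, t, a=0.1, rng=None):
    """Loopless SVRG-style gradient estimate with refresh probability
    p_t = 1/(a*t + 1) (illustrative sketch).

    grad_fi(x, i) returns the i-th component gradient; by convention
    here, grad_fi(x, None) returns the full gradient.
    """
    rng = rng or np.random.default_rng(0)
    p_t = 1.0 / (a * t + 1.0)
    if "snapshot" not in state or rng.random() < p_t:
        # refresh the snapshot point and its full gradient
        state["snapshot"] = x.copy()
        state["full_grad"] = grad_fi(x, None)
    return grad_fi(x, i) - grad_fi(state["snapshot"], i) + state["full_grad"]
```

At $t=0$ we have $p_0=1$, so the snapshot is always initialized; as $t$ grows, full gradients are computed less and less often, which is what removes the need for an inner-outer-loop structure.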
Rebuttal 1: Rebuttal: Thanks to all reviewers for examining our manuscript and helping us improve our paper. We appreciate the constructive comments from the reviewers, and we address all raised issues via individual comments. We would like to highlight that: - **We propose the first adaptive methods that simultaneously achieve optimal asymptotic rates in the strongly convex and convex settings, both with and without interpolation. This is a significant contribution, as it is not easy to determine whether or not the interpolation condition holds for a given problem (without solving it).** We work on smooth and convex optimization. We focus on the practical scenario where the underlying interpolation condition of the considered problem is unknown to the users. We propose two new robust stepsizes based on the Polyak stepsize and line-search. These are the first adaptive methods that simultaneously achieve optimal asymptotic rates in the strongly convex and convex settings, both with and without interpolation (except for the strongly convex, non-interpolation case), without requiring knowledge of the Lipschitz constant. Furthermore, AdaSPS only needs a lower bound of the optimal function value, and AdaSLS is completely parameter-free. Under the standard individual smoothness condition, the required assumptions are the weakest among all previous adaptive methods. We provide theoretically suggested hyper-parameters for these stepsizes, which makes them even more convenient and reliable for users to apply in practice. Moreover, their competitive performance in deep learning experiments shows strong potential for non-convex optimization as well. - **We propose the first variance-reduced methods with Polyak stepsizes or line-search. 
This is a significant contribution, as trivial combinations of existing variance-reduction techniques with Polyak stepsizes or line-search do not work (see the lower bound proven in [8] for line-search, and Appendix E for SPS).** Polyak and line-search type methods cannot converge within the classical variance-reduction framework. In this work, we successfully break this long-standing barrier and manage to accelerate these two complicated stepsizes. We did so by first proposing a novel variance-reduction framework based on a random proxy-function sequence and then applying our robust stepsizes to the new proxy function. We prove the optimal rate of our new algorithms for convex problems. We also introduce a new decreasing-probability strategy, which allows us to remove the classical inner-outer-loop structure and makes the proof much easier. Numerical experiments show their strong performance in practice. Reviewer NwRJ found our two algorithms AdaSPS and AdaSLS interesting. Reviewer a9xP thinks our variance-reduction framework combined with adaptive stepsizes is novel. Reviewer 7mHQ correctly recognizes that our new robust stepsizes have strong theoretical guarantees and praises AdaSLS: 'Importantly, AdaSLS achieves all this without requiring knowledge of any problem-dependent parameters.' Reviewer 7mHQ further agrees that 'the intuition behind the proposed variance reduction framework is interesting, and could potentially motivate new algorithms.' Reviewer Dgvs thinks using these stepsizes and combining them with acceleration can be useful in practice. Based on the questions of the reviewers, we have made the following major updates. 1) We replaced the individual strong-convexity assumption in Theorem 5 with the classical individual convexity and proved the same linear convergence rate. 2) We added the comparison with SARAH in the experiments. 3) We fixed the typos in the last two lines of the interpolation columns in Table 1. 
To summarize, in this work we propose two new adaptive stepsizes that enjoy robust convergence guarantees, which serves as a good step towards a fully automatic adaptive algorithm. Our novel VR framework is general and may motivate more personalized variance-reduction techniques in the future. We believe both of them are important contributions to the community. We anticipate an interactive discussion with you, and we will be most happy to answer any remaining questions. Pdf: /pdf/7179b3d5dfa0b03daf5dc4970b55ae99023d5b2b.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Reject
Summary: This paper proposed a low-rank way of training LLMs which allows more flexibility than most if not all popular low-rank training methods. It's a parameter-efficient approach with limited GPU usage, so it has the potential to be applied to larger models. Strengths: 1. The rationale behind the idea is reasonable. We'd like to have higher-rank models to provide capacity while keeping the number of trainable parameters small. 2. The Introduction reads well. Weaknesses: Overall, I like the idea, but there are a few concerns that I think need to be addressed in order to accept it. 1. It's not well written, as there are many unexplained notations (the r at L85), link errors (e.g., L106 references sec 3), and the algorithm is not explained in a line-by-line fashion. Many details are skipped, which leads to the problem that reading the code is the only way to fully capture the proposed method. 2. The motivation is not convincing. Rank(A + B) <= Rank(A) + Rank(B) doesn't guarantee Rank(A + B) will be larger than min(Rank(A), Rank(B)). Without any further constraints, there is no guarantee L93 is true. We just know the upper bound is higher, but to claim the statement, we need a lower bound, which is not discussed at all. The idea itself is interesting but the rationale is flawed. 3. Experiments are limited, as shown in the Questions section. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. A fundamental problem with the setup is that LoRA is used to fine-tune a pre-trained LLM "for downstream tasks". Thus, comparing PEFT methods on pre-training performance seems to be meaningless. The authors should compare ReLoRA pre-train + fine-tuning vs. full train + LoRA fine-tune and analyze the ranks to help readers understand what's going on under the hood. Without this type of analysis (no matter whether ReLoRA performs better or worse), the insights are limited. 2. 
Following 1, I'd like to see some analysis/experiments on downstream tasks like GLUE/SuperGLUE. 3. The jagged schedule is rather arbitrary. It is not clear what happens if I switch to another dataset or model, or how it should be combined with fine-tuning. There is no theoretical analysis of its stability, nor experiments to demonstrate it. 4. Another baseline which should be included is training using plain SGD with the jagged schedule. The above-mentioned problems with Adam might be moot given such a complex schedule. 5. What's the formal definition of Control? It's not clear how it's performed, and frankly speaking, the performance of Control is fairly comparable to ReLoRA in Table 2. 6. The selection of r needs to be discussed. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: It's discussed in the paper and it reads well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
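On the rank point raised in Weakness 2, here is a toy numpy check of the behavior both sides discuss (our own illustration; with generic random factors the sum of two rank-$r$ updates attains the upper bound $\mathrm{rank}(A)+\mathrm{rank}(B)$ with probability one, although, as the reviewer notes, nothing forces this in general):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4

def low_rank_update(d, r, rng):
    # A LoRA-style product W_A @ W_B has rank at most r.
    return rng.standard_normal((d, r)) @ rng.standard_normal((r, d))

delta1 = low_rank_update(d, r, rng)  # first segment, then merge-and-reinit
delta2 = low_rank_update(d, r, rng)  # second, re-initialized segment
total = delta1 + delta2              # cumulative weight update
```

Here `np.linalg.matrix_rank(delta1)` is 4 while the rank of `total` is 8: each individual update is rank-limited, but restarts let the cumulative update reach a higher rank.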
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and thorough review of our paper. We sincerely appreciate your insights and would like to address your concerns. Here's our response to your comments: Regarding your statement about PEFT methods: the goal of this paper is to demonstrate that parameter-efficient methods can be effectively used at the resource-demanding and expensive pre-training stage. Comparing ReLoRA for both pre-train + fine-tuning vs. full train + LoRA is definitely interesting, but our goal was to develop a way to use parameter-efficient methods to increase the efficiency of pre-training. ### Motivation Concerning Ranks While there is no guarantee that Rank(A+B) is greater than the minimum of the two ranks (in fact, without additional constraints, there is no guarantee that it is greater than zero), our goal in this section was to provide an intuition for why ReLoRA restarts can increase the rank of the update. We do not claim a theoretical result that ReLoRA yields a higher-rank update. However, we empirically demonstrate that ReLoRA increases the rank of the learned update compared to LoRA (see Figures 3 and 4 in the paper). Our intuition concerning the Adam state affecting the performance of ReLoRA is verified in our ablation study. ### Jagged Schedule We base our schedule on the well-established linear warmup + cosine decay schedule used, for example, in the BLOOM, LLaMA, and Pythia models [BigScience Workshop, 2022; Touvron et al., 2023; Biderman et al., 2023]. The "jaggedness" we introduce is not arbitrary and is aimed at avoiding loss divergence, as we demonstrate in our ablation study. ### On the Use of SGD Adaptive learning rate optimizers such as Adam are essential to training large neural networks because they mitigate saddle points [Staib et al., 2019], which cannot be achieved with a learning rate schedule alone. 
During ReLoRA development, we considered using second-order deep learning optimization methods such as Shampoo [Gupta et al., 2018]. However, just like Adam, these optimizers are stateful; thus, they exhibit the same issues as Adam in the case of ReLoRA. ### Selection of r The value "r=128" was determined based on our preliminary experiments. Similar to LoRA, we observe little difference between the performance of methods trained with any reasonable r. The main distinction is that in fine-tuning on a small dataset, r=16 or even r=8 could be used, whereas in our pre-training setting values below 64 demonstrated poor performance. Another consideration for the selection of r: it shouldn't be more than half the hidden size, as at that point the number of trainable parameters of full-rank and LoRA-based networks match. ### Addressing presentation issues The term r at L85 is defined in equation 2, between lines 85 and 86. We will make this definition more explicit. We apologize for the typo at L106; the reference was intended for "Table 3" instead of "Section 3". We deeply care about the paper's clarity, so please let us know if there are other notations or sections that need further clarification. ### Other comments We appended our fine-tuning results to the rebuttal for further clarity. The "Control" baseline is essentially a transformer network trained conventionally (not in low rank) but with the same parameter count as the number of trainable parameters of the corresponding low-rank method. Thank you for your valuable feedback. We believe it will significantly improve our paper. We would be grateful for a reconsideration of our paper's score, as we feel our work introduces a novel and impactful result: reparametrization-based PEFT methods can be used to reduce the costs of an expensive pre-training stage. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. But I think my major concerns are not alleviated. 1. Comparing ReLoRA for both pre-train + fine-tuning vs. 
full train + LoRA is "needed" IMO, as we want to know what expressibility is lost in this case. Your claim is not very convincing to me. 2. I am skeptical of the GLUE results. There is no description of how they were run and what the setup is for each category. In particular, the numbers don't quite match the well-known results. Take MRPC for example: you can easily find numbers around .86 (https://huggingface.co/Intel/bert-base-uncased-mrpc#:~:text=It%20achieves%20the%20following%20results,Accuracy%3A%200.8603), but the authors reported .8038. I have no idea how the experiments were done. 3. I think the responses on other technical details are reasonable. However, I'd like to see how these are integrated into the main text and see how that reads to decide whether the score could be increased. Overall, I agree with what the authors claimed about the novelty, and I did indicate that. But somehow that's the only merit, and the whole paper is far from ready in terms of readability and soundness of experiments. It seems the other reviewer (Reviewer 4sGe) also feels the experiments are not well established. I encourage the authors to further revise the write-up for a better and easier reading experience, which could really benefit the community. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We really appreciate having your perspective, but we don't believe the comparisons you suggested are relevant for the study. We explain the details below. ### ReLoRA (pre-train + fine-tune) vs. full train + LoRA fine-tune: > 1. Comparing ReLoRA for both pre-train + fine-tuning vs. full train + LoRA is "needed" IMO, as we want to know what expressibility is lost in this case. Your claim is not very convincing to me. It's not clear what additional hypothesis would be tested by comparing [ReLoRA pre-train + ReLoRA fine-tune] and [full-rank pre-train + LoRA fine-tune], or whether this hypothesis would be falsifiable. 
Table 4 in our response already has an apples-to-apples comparison with full-rank training: you can see that [ReLoRA pre-train + full-rank fine-tune] and [full-rank pre-train + full-rank fine-tune] are competitive in all tasks we tested. We don't see how adding the experiment you suggested would provide additional insight. We appreciate your concern with better understanding the implications of full-rank vs. low-rank training. This is why we looked at the qualitative and quantitative differences in the singular value spectra in Figures 3 and 4. This analysis clearly shows that the ReLoRA spectrum is more similar to full-rank training than to LoRA. ### GLUE Results Discrepancy > 2. I am skeptical of the GLUE results. There is no description of how they were run and what the setup is for each category. In particular, the numbers don't quite match the well-known results. Take MRPC for example: you can easily find numbers around .86, but the authors reported .8038. I have no idea how the experiments were done. The BERT-base model was pre-trained on 128B tokens (accounting for epochs), while our models were trained on 7B tokens. Thus, the difference in the absolute performance of our models is expected and can be unequivocally attributed to the more than **10X difference in compute**. An apples-to-apples comparison between models in our study clearly demonstrates that ReLoRA-pretrained models are competitive with full-rank pre-trained models trained on the same amount of data. We used standard settings for GLUE in all experiments: 3 epochs, lr=2e-5 linearly decaying to zero, no weight decay, batch size 64. We will include all the details in the camera-ready. Please keep in mind that our goal in this study was not to establish SOTA, but to demonstrate the viability of a new pre-training method. We will further clarify this point in the camera-ready. ### Incorporation of Review Feedback > I think the responses on other technical details are reasonable. 
However, I'd like to see how these are integrated into the main text and see how that reads to decide whether the score could be increased. We would love to share the updated version with you so that you could evaluate the changes, but the NeurIPS policy **does not allow for it**. Unlike journals, the standard practice in top ML conferences is to rely on the authors' commitment to integrate the updates during the camera-ready phase. > the whole paper is far from ready in terms of readability We have put a lot of work into ensuring the clarity of the paper, and we would gladly incorporate any additional specific feedback. > It seems the other reviewer (Reviewer 4sGe) also feels the experiments are not well established As we note in our response to Reviewer 4sGe, we believe there was some confusion regarding our methods. Reviewers who responded during the rebuttal period seem to be satisfied with the additional results we provided. We value your feedback and will address your remaining concerns in our final submission. We hope this allows you to reconsider your score.
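For readers following this thread, the "jagged" learning-rate schedule under discussion (standard linear warmup + cosine decay, with the lr dropped to 0 and quickly re-warmed after each merge-and-reinit) can be sketched as follows. The restart positions and re-warmup length here are made-up illustrative values, not the paper's settings:

```python
import math

def jagged_lr(step, total, base_lr=1e-3, warmup=100,
              restarts=(2000, 4000), rewarmup=50):
    """Illustrative 'jagged' cosine schedule (our sketch, not the
    authors' exact implementation): linear warmup, cosine decay, and a
    short linear re-warmup from 0 after each ReLoRA-style restart."""
    if step < warmup:
        return base_lr * step / warmup          # initial linear warmup
    lr = 0.5 * base_lr * (1 + math.cos(math.pi * step / total))
    for r in restarts:
        if r <= step < r + rewarmup:            # re-warm after a restart
            return lr * (step - r) / rewarmup
    return lr
```

The schedule matches a plain warmup+cosine curve everywhere except in a short window after each restart, where it ramps back up from zero, which is the mechanism the rebuttal credits with avoiding loss divergence.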
Summary: This paper focuses on low-rank training techniques and introduces ReLoRA, which uses low-rank updates to train a high-rank network (<=350M parameters). The main idea is to employ LoRA during training and "restart" it in order to artificially increase the rank, which is a nice idea. The difficulty remains in the optimization process due to the gradient after the "reset". ReLoRA performs a partial reset of the optimizer state during merge-and-reinit and sets the learning rate to 0 with a subsequent warmup. The proposed method is elegant and novel. It is a nice trick to artificially increase the rank of LoRA. Overall, it is unclear to me what we mean by "pre-training". From the text, it seems we are talking about fine-tuning large models with adapters. In the experiment section, the authors talk about training on C4 with a model similar to LLaMA, which indicates pre-training. However, ReLoRA is initialized from a full-rank training at 5k steps. My guess would be that the authors randomly initialize a model (e.g., BERT) and pre-train using an adapter approach. I am disappointed with the experiment section. While I understand the limitation in computational resources, training models of <= 350M parameters is disappointing when playing in the league of training large neural networks (as suggested by the title) or comparing with LLaMA. Nowadays, 8 GPU-days of compute allows one to fine-tune a LLaMA or similar model (in 1 day), not to pre-train one. Table 2 is missing the training time for each model, which is needed to understand the benefits of ReLoRA over the control, full training, and LoRA. If the method is 2 times slower than Control, it does not necessarily make sense to use it, since the performance gap at 350M is less than 1 ppl (which I expect to be even smaller with larger models). Evaluating LMs only on perplexity is not sufficient. Please compare pre-trained models on GLUE/etc. (as commonly done for adapter/prompting papers) using fine-tuning and PEFT approaches. 
This will show that: - The base model after pre-training with ReLoRA is (maybe?) better than full-rank pre-training or standard pre-training - Higher performance can be obtained with ReLoRA during fine-tuning. Missing references: [1] Wang et al. 2023, Learning to Grow Pretrained Models for Efficient Transformer Training (ICLR) Strengths: - Nice trick to artificially increase the rank of LoRA - The proposed method is sound Weaknesses: - the paper writing and clarity must be improved - the experiment section is insufficient Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: See above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Your comments have helped clarify certain areas of the paper that we need to address. Here's our response to your points: ### Clarification on Pre-training vs. Fine-tuning: It seems there is some confusion regarding the distinction between pre-training and fine-tuning, which is important here. Pre-training involves training with a language modeling objective on large amounts of free text, rather than fine-tuning on relatively small task-specific downstream data. To the best of our knowledge, no prior work has applied parameter-efficient training methods to pre-training. We use the LLaMA architecture (a specific variation of the transformer decoder), but our models are trained from scratch using the ReLoRA method. The ReLoRA training method includes a notably short warm-start period, which is especially evident in our training cost estimates for a 1B model (see our reply to Reviewer 2, nJNf). Our models are trained from scratch using ReLoRA training, and there is no involvement of adapter techniques, as Adapters [Houlsby et al., 2019] and LoRA [Hu et al., 2021] are conceptually different kinds of parameter-efficient fine-tuning methods (please refer to the survey by Lialin et al. [2023] for details). We appreciate your observation and will ensure this distinction is clear in the revised paper. ### Experiment Section & Model Size While we demonstrated the efficacy of our method at the scale of 350M parameters, we are optimistic about its scalability to larger networks, as we found ReLoRA to be **more efficient at 250M and 350M** than in the 60M and 130M networks (judged by the difference between full-rank and low-rank training). We recognize that our resource constraints limit us in some regards. However, we believe the review process should primarily focus on the soundness of the research rather than computational resources, and appreciate your understanding. 
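The restart mechanic discussed in this thread (merge-and-reinit combined with a jagged learning-rate schedule) can be sketched in a few lines. This is a minimal illustration only; the function names, constants, and initialization choices below are assumptions for exposition, not the paper's actual code or hyperparameters:

```python
import numpy as np

def jagged_lr(step, base_lr=1e-3, warmup=100, reset_every=1000, restart_warmup=50):
    """Jagged schedule: an initial linear warmup, then after every
    merge-and-reinit ("reset") the LR drops to zero and quickly re-warms.
    All constants here are illustrative."""
    if step < warmup:
        return base_lr * step / warmup
    pos = (step - warmup) % reset_every  # position within the current low-rank cycle
    if pos < restart_warmup:
        return base_lr * pos / restart_warmup
    return base_lr

def merge_and_reinit(W, A, B, rng):
    """Merge the accumulated low-rank update B @ A into the frozen weight W,
    then re-initialize the factors so the next cycle can explore a fresh
    low-rank subspace. B is zeroed so the merged function is unchanged."""
    W = W + B @ A
    A = rng.standard_normal(A.shape) * 0.01  # fresh random init (illustrative)
    B = np.zeros_like(B)
    return W, A, B
```

Repeating this merge-and-reinit cycle is what lets the sum of several rank-r updates reach an effective rank higher than r, which is the core idea the reviewer summarizes above.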
### Evaluation Metrics beyond Perplexity Although our primary focus was on showcasing computational efficiency during pre-training on a massive dataset, we agree that a comprehensive evaluation should encompass downstream tasks. To this end, we performed a **downstream evaluation on GLUE** tasks which you can find in the one-page PDF attached to the rebuttal. This additional evaluation reiterates that our method's performance is on par with traditional training, especially when considering computational savings. We appreciate the **reference suggestion** and will incorporate it into our related work section. We value your feedback and believe it will significantly improve our paper. Your observations have enabled us to present our findings more clearly, and we hope the revisions will address your concerns. We would be grateful for a reconsideration of our paper's score, as we feel our work introduces a valuable perspective to the deep learning community.
Summary: This paper introduces a novel approach, ReLoRA, for training large-scale neural networks. Recognizing the limitations of conventional low-rank adaptation (LoRA) in training high-performing transformer models, the authors propose ReLoRA, which achieves high-rank network training through multiple low-rank updates. The new method uses a full-rank training warm start followed by a merge-and-reinit (restart) strategy, a jagged learning-rate scheduler, and partial optimizer resets, making it efficient particularly for large networks. The research finds that the efficiency of ReLoRA increases with network size, positioning it as a potential candidate for efficient training of multi-billion-parameter networks. The paper's results suggest that low-rank training methods can potentially improve the efficiency of training large language models and provide valuable insights for deep learning theory. These insights could further our understanding of neural network trainability and their exceptional generalization capabilities in the overparameterized regime. Strengths: 1. Important work applying LoRA to pre-training 2. Good reproducibility: code is released for readers; hyperparameter settings are available, though some parameters are still missing, see below. 3. Sophisticated design of the ablation study Weaknesses: 1. What is the number of trainable parameters in the experiments? The rank r of LoRA is also not reported (or hard to find). I assume 60M, 130M, 250M, and 350M are the total numbers of parameters. 2. The perplexity reported for Control with 250M parameters appears to be an outlier, greater than that of 130M parameters. Please double-check. If it's real, then the fluctuation range of perplexity could be too large for us to draw reliable conclusions from the results. 3. One important missing baseline is LoRA with warm start, which is reported in the ablation study, but not in the main results of Table 2, nor in Figures 3 and 4. 
Since the gains from the main techniques developed in this work for ReLoRA can only be seen when compared against the proper baseline, it is unfair to compare with LoRA without warm start. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors also report the absolute amount of training resources saved (e.g., total GPU memory and total training wall time for each setting, instead of just 30% memory reduction and 52% training throughput)? It is hard to see the efficiency of low-rank training without these values. I am also curious to see how much more it costs compared to LoRA with warm start, if there is any extra cost. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. The initial warm start is important/indispensable to reach performance comparable to full training. If one starts from scratch, with no warm-started checkpoint, one may still be limited by the computing resources needed for the warm start. The proposed method may just save some time after the warm start, which is, however, not shown in the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your assessment of our work. We really appreciate your feedback and suggestions! Here's a response to your questions: ### Trainable Parameters and Rank 'r' of LoRA You're correct about the parameter counts. We provide the total number of parameters (60M, 130M, etc.) in the paper. To specify, the **trainable** parameters for each low-rank model are: * 60M: 42M * 130M: 72M * 250M: 99M * 350M: 125M We updated Table 2 to make this clear (see attached PDF). Regarding the rank of LoRA, we now state in the **"Architecture and training hyperparameters"** section that for all LoRA and ReLoRA experiments we use a rank of r = 128, based on our preliminary experiments. ### Perplexity Discrepancy for Control with 250M Parameters We checked the results, and there was an error in the hyperparameters, specifically the number of layers in the model and the LR warmup. The updated value for the Control with 250M parameters is 25.43, which is now in between the 72M and 99M models (the controls for 130M and 250M), as expected. ### Warm-Start LoRA Baseline: We added the warm-start + LoRA baseline to Table 2 for each model size and observed a small but consistent improvement of ReLoRA over this baseline. We also performed additional experiments with a smaller warm-start phase to further validate our findings. Specifically, when restricting the warm-start phase to just 2K steps for the 350M model, we observed a gap of more than 1.4 ppl points between LoRA (25.08) and ReLoRA (23.64). While the absolute performance of ReLoRA is lower compared to full-rank training in this context, this experiment shows that LoRA restarts positively impact model performance. For all of these results, see the attached PDF. ### Absolute Training Resources: To provide a clearer perspective on the efficiency of our method, we estimated the costs for training a 1B model on 20B tokens (~Chinchilla-optimal) using a 2x3090 setup. 
Both models use sequence length 512 and a microbatch size of 4 examples. The total batch size is 1152 examples, or ~600K tokens. **Regular Training:** * GPU Memory: 21 GB per GPU * Throughput: Estimated at approximately 4,500 tokens/second * Wall time: **1235 hours** on 2x3090 GPUs **ReLoRA:** * GPU Memory: 14.5 GB per GPU * Low-rank training throughput: ~9,200 tokens/second * Wall time: **762 hours** on 2x3090 GPUs * Breakdown: The warm-start phase (25% of total training steps) takes around 309 hours, based on the 4.5K tokens/second full-rank training throughput estimate above. The subsequent low-rank training phase accounts for the remaining 453 hours, with an estimated throughput of 9.2K tokens/second. A noteworthy aspect of ReLoRA is the reduced GPU memory requirement. This allows for an increased microbatch size during training, contributing to enhanced efficiency and throughput during the low-rank training phase. ### Cost Comparison with Warm-Start LoRA: The difference in cost between LoRA and ReLoRA is minimal. The main overhead in ReLoRA is resetting the optimizer states, which doesn't add significant time to the training (only a few seconds). We hope these updates address your concerns. Your feedback has been crucial in refining the paper, and we appreciate it. We humbly request a reconsideration of the paper's score. As this work presents the first proposal for using parameter-efficient methods for pre-training, we believe it opens a promising avenue for new research, and we would greatly value the opportunity to share it with the NeurIPS community. --- Rebuttal Comment 1.1: Comment: Thank the authors for replying to all my questions and concerns. The results are now more reasonable and solid than in the initial version. Please integrate them into the final version. I'll also raise my rating.
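The wall-time figures quoted in the rebuttal above follow directly from the stated throughputs; the short script below merely reproduces that arithmetic (20B tokens total, a warm start covering 25% of steps at full-rank speed, the rest at low-rank speed):

```python
def hours(tokens, tokens_per_sec):
    """Wall-clock hours needed to process a given number of tokens."""
    return tokens / tokens_per_sec / 3600

TOTAL = 20e9                       # 20B tokens (~Chinchilla-optimal for a 1B model)
full = hours(TOTAL, 4500)          # regular full-rank training at ~4.5K tok/s
warm = hours(0.25 * TOTAL, 4500)   # ReLoRA warm start: 25% of steps, full-rank speed
low = hours(0.75 * TOTAL, 9200)    # remaining low-rank phase at ~9.2K tok/s

print(round(full), round(warm), round(low), round(warm + low))
# → 1235 309 453 762
```

The printed values match the rebuttal's 1235-hour full-training estimate and the 309 + 453 = 762-hour ReLoRA breakdown.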
Summary: The paper proposes an extension of LoRA. The main insight is that LoRA can be re-initialized multiple times during training, and in the end this produces a high-rank update. The authors show that it is quite challenging to re-initialize the layers, mostly due to the internal state of Adam. They then propose a pruning-based technique to overcome this limitation when using Adam. They also propose a learning-rate schedule that helps avoid divergence of the network. The authors show that this leads to better performance than LoRA on the C4 dataset. Strengths: The paper proposes an interesting technique which might be useful. The experiments performed by the authors are reasonable, and even with limited compute resources they paint a clear picture. The authors have used the C4 dataset, which is quite reasonable for a start. Weaknesses: * The method proposed is more a set of approaches to avoid divergence due to Adam. * The techniques seem found by trial and error; is there a more principled approach? * It would have been better to see the downstream performance of the ReLoRA-trained model. * Can you provide a better understanding of why the second-to-last row in Table 3 performs very similarly to the baseline? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please look at the weakness section. I am happy to bump up my score to a weak accept if you can do the following two things - 1. Perform evaluation on downstream tasks, on at least one or two datasets 2. Why is the performance so close in Table 3? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the paper is well written. I understand the lack of compute to perform large experiments. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thorough review and the feedback provided. Thank you! To address your questions, we've performed the following additional experiments: ### 1. Performance Similarity in Table 3: You pointed out the similarity in performance between the second-to-last row in Table 3 and our baseline. To dig deeper into that, we performed two additional sets of experiments: **1.1. Warmup + LoRA baselines for all runs in Table 2:** After hyperparameter tuning of both LoRA and ReLoRA, we confirm a small but consistent improvement of ReLoRA over warm-start + LoRA. **1.2. Smaller warm-start experiments:** We conducted experiments with both LoRA and ReLoRA for the 350M model, but restricted the warm-start phase to 2K steps. The results show a performance gain of 1.4 ppl points for ReLoRA over LoRA (ppl 23.64 vs 25.08). While the absolute performance of ReLoRA is lower compared to full-rank training in this context, these experiments validate our initial hypothesis that LoRA restarts positively impact performance. These experiments show that ReLoRA offers consistent improvements over warmed-up LoRA, shedding light on the distinction between the two methodologies. ### 2. Downstream Performance Evaluation: Based on your recommendation, we attached supervised fine-tuning results for full-rank, ReLoRA, and non-pretrained 350M models. ReLoRA shows downstream performance on par with full training, beating the baseline fine-tuned from random initialization. Please note that absolute performance is lower than, e.g., BERT, since BERT is trained on 128B tokens compared to the 7B in our experiments. For all of these results, see the single-page PDF attached. We believe our results demonstrate the potential of ReLoRA as a next-generation method for model training, and specifically, that parameter-efficient methods can be applied at the resource-heavy pre-training stage. 
We hope that the additional experiments and clarifications address your concerns and make a compelling case for this paper. Thank you once again for your insights, and we look forward to the committee's feedback.
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort the reviewers dedicated to reviewing our paper. Your feedback, ranging from detailed concerns to constructive suggestions, has been instrumental in guiding us to refine and clarify our work. Following the NeurIPS rebuttal policy, we attach a single-page PDF with additional experiments. It includes one figure and two tables. * Table 2 is updated with the warm start + LoRA baseline for every model and also provides the number of trainable parameters. * Figure 5 demonstrates a significant difference between LoRA and ReLoRA when the warm start is restricted to only 2K steps. * Table 4 provides a downstream evaluation on several GLUE tasks. Given the clarifications provided in our answers to individual reviewers and the additional experiments presented in the attached one-page PDF, we kindly request a re-evaluation of our manuscript's scores. Thank you for the time and effort dedicated to reviewing our work. Pdf: /pdf/25d5e4b1bb9cfba9081ebd1abb79c4882097954b.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Transforming to Yoked Neural Networks to Improve ANN Structure
Reject
Summary: The paper introduces a new method called YNN that transforms traditional ANN structures into yoked neural networks, promoting information transfer and improving performance. The authors analyze the existing structural bias of ANN and propose a model YNN to efficiently eliminate such structural bias. In their model, nodes carry out aggregation and transformation of features, and edges determine the flow of information. They further impose auxiliary sparsity constraints to the distribution of connectedness, which promotes the learned structure to focus on critical connections. Finally, based on the optimized structure, they also design a small neural module structure based on the minimum cut technique to reduce the computational burden of the YNN model. The learning process is compatible with the existing networks and different tasks. The obtained quantitative experimental results reflect that the learned connectivity is superior to the traditional NN structure. Strengths: 1. YNN promotes information transfer significantly which helps in improving the performance of the method. 2. The authors propose a model that efficiently eliminates structural bias in ANN. 3. The authors design a small neural module structure based on the minimum cut technique to reduce the computational burden of the YNN model. Weaknesses: 1. There is a lack of ablation study, e.g., comparing the model performance using different clique size/number of cuts 2. The equations presented in Section 3.3 are excessively complex and challenging to comprehend. The authors have employed "W" and "w" for too many different variables in their notations, leading to confusion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors seem to be putting forth a novel technique for calibrating hidden states within each layer, which is delineated through the solution of a system of equations, as presented at the bottom of page 5. 
One might question how this proposed approach contrasts with conventional techniques such as Graph Neural Networks (GNN) or self-attention mechanisms. Drawing a parallel, it appears that the essence of this newly proposed method bears resemblances to the fundamental principles of GNNs and self-attention methods, notably the propagation of information from one node to its neighboring nodes. Therefore, a comprehensive analysis of the contrasts and similarities of these approaches would be beneficial for a deeper understanding of this proposition. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
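The reviewer's question about the relation to GNNs and self-attention can be made concrete. On one common reading of the paper's idea, a "yoked" layer lets nodes at the same level exchange information through an intra-layer mixing step before the result is passed forward. The sketch below is an illustrative interpretation only; the function name, the additive mixing rule, and all shapes are assumptions, not the paper's actual equations:

```python
import numpy as np

def yoked_layer(x, W_in, W_yoke, b):
    """One layer in which same-level nodes first compute their usual
    feed-forward pre-activations, then exchange information through a
    learnable intra-layer mixing matrix W_yoke. With W_yoke = 0 this
    reduces to an ordinary fully connected layer."""
    h = x @ W_in + b      # standard cross-layer computation
    h = h + h @ W_yoke    # same-level information exchange
    return np.tanh(h)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W_in = 0.1 * rng.standard_normal((8, 16))
W_yoke = 0.1 * rng.standard_normal((16, 16))
out = yoked_layer(x, W_in, W_yoke, np.zeros(16))
print(out.shape)  # → (4, 16)
```

Viewed this way, the intra-layer step is indeed close to one round of message passing over a fully connected graph of same-level nodes, which is exactly why the comparison with GNNs and self-attention deserves the analysis the review asks for.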
Rebuttal 1: Rebuttal: Thank you very much for your precious comments. 1> Please refer to the PDF file of the Author Rebuttal by Authors; we have carefully organized the contributions of the work there, and we hope it answers some of your questions. All information is in the picture; if the image is small, please enlarge it. 2> GNNs and self-attention mainly focus on better organizing the input data to obtain benefits, e.g., node and edge embeddings for GNNs and QKV vectors for self-attention. Essentially, by better organizing the input data, they can better handle graph or sequence data, as self-attention can mine the key information of a sequence. On the other hand, YNN is a generalization of the NN structure from a tree to a cyclic graph, and its benefits are described in the PDF. 3> The most important forward and backward processes have been carefully organized in the rebuttal PDF of the Author Rebuttal by Authors. 4> Thanks very much. --- Rebuttal Comment 1.1: Title: an important message Comment: Dear NeurIPS reviewer, I am writing to draw your utmost attention to our piece of work. At the heart of our innovation lies a critical reimagining of traditional NNs. Currently, NNs operate on asynchronous tensor flow, often organized hierarchically in a tree-like structure. However, this approach inadvertently hampers the nodes within each level from effective communication, relegating them to mere information carriers devoid of meaningful interaction. This inherent limitation substantially diminishes the potential of NNs, impeding their full capabilities. Our work transcends these constraints by introducing a paradigm shift. We present a method that enables synchronous communication among nodes within the same level, a fundamental departure from the status quo. This transformative adjustment yields a remarkable enhancement in information transformation, thereby significantly boosting the overall capacity of NN structures. 
By fostering a collaborative environment among nodes, our approach leverages their collective power to unlock unprecedented capabilities. Particularly, what sets our research apart is its inspiration drawn from the intricate dynamics of biological neural systems. Unlike the traditional stacked unit approach, where neural elements operate in isolation, our approach mirrors the cooperative nature of biological neural modules. In these systems, multiple neural units collaboratively execute precise functional implementations, resulting in exquisite performance. Our innovation is poised to bridge the gap between artificial and biological neural networks, thus propelling NN structures closer to the remarkable efficiency of their natural counterparts. For a succinct overview of the in-depth details, I encourage you to review the attached one-page PDF in my rebuttal attachment. This document encapsulates the essence of our groundbreaking contribution and underscores the urgency of its consideration. Your attention and support at this juncture are invaluable, and I extend my heartfelt gratitude for your consideration. Warm regards, Authors
Summary: This paper proposes a module called YNN that can exchange information among the neurons within the same layer. The proposed module can be combined with an MLP. Experiments on several small-scale datasets show that the method achieves good performance compared to previous networks. Strengths: - The motivation of this paper is valid and interesting. Weaknesses: - I don't see a fundamental difference between the proposed YNN and graph neural networks. This method can be viewed as a special case of a GNN with a fully connected adjacency matrix. - The evaluation is only conducted on small-scale datasets. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - How deep can this YNN be? Typically this kind of network suffers from the information-diminishing problem, since features become over-smoothed as depth increases. - Can this network generalize to large-scale datasets such as ImageNet? If so, could the authors show some experimental results on it? - What is the fundamental difference between YNN and GNN? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Please refer to the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your precious comments. 1> Please refer to the PDF file of the Author Rebuttal by Authors; we have carefully organized the contributions of the work there, and we hope it answers some of your questions. All information is in the picture; if the image is small, please enlarge it. 2> If the model is too deep, since we use the sigmoid activation function, it will suffer from the information-diminishing problem. However, we can typically turn to the ReLU activation function to alleviate this. 3> According to Section 3.6, if we cut the graph into small sub-graphs, our model can be applied to large datasets such as ImageNet. 4> GNNs mainly deal with graph data as input. A GNN focuses on better organizing the input data to obtain benefits, e.g., node and edge embeddings; essentially, by better organizing the input data, it can better handle graph data. On the other hand, YNN is a generalization of the NN structure from a tree to a cyclic graph, and its benefits are described in the PDF of the Author Rebuttal by Authors. 5> Thanks very much.
Summary: This paper proposes a 'yoked' neural architecture where neurons at the same level are bidirectionally linked. The authors claim that optimizing this complete graph is superior to current deep neural network architectures, which impose a structural bias, because knowledge is transferred in a way that prevents structural bias. Strengths: * the proposed optimization change is simple, in that it is similar to other methods like DARTS that grow neural networks and regulate their connections, assigning weights to them using ANN optimization algorithms with a regularization term * it is clear how forward propagation is done with the addition of the clique nodes, which are computed in addition to the regular precursor nodes at each layer Weaknesses: * some of the terminology and abbreviations need to be defined/explained at first appearance in the intro paragraphs (e.g. 'yoked', ANN, and DAG) * the method of optimization doesn't seem particularly novel, employing both an L1 and an L2 term to search for the best architecture * figure 2 does not show much about the method or its justification * Is it possible to approximate the non-differentiable minimum-cut algorithm and absorb it into the training procedure? This would be similar to progressive training methods like http://proceedings.mlr.press/v119/evci20a/evci20a.pdf and other related works * please proofread for more typos and such (e.g. 'mata' on l204, other grammatical errors) * results are on quite toy problems, and training with regularization does not necessarily yield the best results Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: * Can the authors comment on the tradeoff between running more gradient updates on a non-yoked network versus the potential representation-learning benefits of a yoked version? * Is the influence of information flow between nodes of the same level not potentially reflected intrinsically in the next level? 
* How does this method compare to other progressive growing methods like the lottery ticket methods or pruning networks? * Why is the 25-node Table 3 DAG result so different from the rest? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 1 poor Limitations: The authors do not discuss the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your precious comments. 1> Please refer to the PDF file of the Author Rebuttal by Authors; we have carefully organized the contributions of the work there, and we hope it answers some of your questions. All information is in the picture; if the image is small, please enlarge it. 2> In our model, the nodes at the same level collaborate to function together neurologically, and the benefits are passed intrinsically to the next level. 3> The 35-node Table 3 DAG result should be 0.3519. Sorry for that. 4> According to Section 3.6, if we cut the graph into small sub-graphs, our model can be applied to large datasets such as ImageNet. Although the graph-cut method itself is not novel, our main contribution is the design of a new structure, as introduced in the PDF file of the Author Rebuttal by Authors; the graph-cut method simply helps apply the structure in practice. 5> Thanks very much. We will revise the paper carefully.
Summary: The paper proposes Yoked Neural Networks (YNN) - an extension of neural networks which, when calculating the value of a node, uses information from the nodes on the same layer in addition to the information from the previous layer of the network (i.e. it "yokes" nodes from the same layer together). Strengths: The paper describes the approach well; it is clear how it works. Code has been provided as an attachment, so it should be reproducible (I have not run the code or looked at it carefully). Weaknesses: (Details about the mentioned weaknesses are given per line, in the field "Questions".) The paper would benefit from describing the contributions of the proposed method in more detail. Particularly, across the paper, some strong statements have been used, but they have not been motivated with evidence. Overall, the idea and the benefits of using the method need to be better motivated. The description of the experiments is not very clear. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Line 2: It is not clear why "the connectivity of a tree is not sufficient to characterize a neural network". Since this is one of the most important motivations for the work, it would be beneficial to give details about it. Line 12: It is not obvious why the method improves; maybe instead of using the word "obviously", some proof of the statement could be given - how does the method improve over existing methods, and by how much? Line 12-13: "YNN can imitate neural networks much better" - Which properties do they imitate better? What does better mean in this context? Line 43: "limiting the signal’s capability for free transmission" - What does this mean? What is free transmission and why is it limited? Line 45-46: "significant drawbacks" - Can you give a citation to prove this claim? What kind of drawbacks? In this paragraph, it is not really clear what limitations of ANNs you are addressing; can you please be more specific. 
What "substantial defects" do you refer to? Line 52: It is not obvious what "YOKE fashion" is, can you please provide a short explanation? Line 54: Instead of using "remarkable improvements", could you please give details about what kind of improvements you have obtained - in what measures, and by how much. Line 59: "our method efficiently eliminates structural bias" - How does it eliminate it? Please define structural bias, how it is eliminated and what does "efficiently" mean. Again, please give details of what exactly is improved and how. Line 94: Please provide a citation for ResNet. Line 139: "some researchers attempted to generalize this structure" - Please provide a citation. Line 143-144: "makes the transformation of information quite inadequate", "makes the transformation of information quite inadequate." - What does inadequate mean in this context? What needs to be improved? Why is the structure inferior inferior, what kinds of properties are missing or need to be improved? Line 149: (very minor, typo) "is" is not necessary Line 156: It is not really clear what properties of a neural network have inspired the YNNs. Please give more details and some specifics. Line 163: It is not clear what you mean by "greatly enhance the characterization". Please explain. Line 181: Is the "meta value" the value as it would be from a standard NN pass, without yoking the same layer nodes? And the "real value" is the value from the forward pass, together with the values calculated from the other nodes at the same level? Line 248: "NE algorithm" should be explained and a citation given. Line 249: (minor) The reference to "Definition 2" is not linking to it. Line 259-261 - The last sentence of the paragraph is hard to follow. Questions about the experiments: It is not very clear how the experimental setup is done. Are you doing classification, and measuring classification accuracy on the three mentioned datasets? 
Please give more information about the task(s) you are addressing and what you are measuring. Line 265: It is not clear what the compared models are. Please give details of the structure of the used "traditional NN", "SAE" and "generalized traditional NN". Also, it is not clear what the structure of the neural network used in your approach is. Is it based on a feed-forward neural network, but with yoked nodes? For all the compared models, please give details about network structure, number of layers, training method, and training details. (Maybe this can be seen in the code, but it would greatly increase the understanding of the paper if these details were included in the experiments section.) In the tables with results, what is the meaning of the number of nodes in the columns? Is this the number of nodes in one NN layer? How many layers are there? Line 267 and below: Could you provide a reference to the used CUTG dataset? Can you please give a little more description: is this an existing dataset which you annotated further? Or did you collect this data - if so, can you please give more details of how it was collected and annotated? Can you please give citations for "the second dataset" and "Connect-4"? Line 280: Please give a reference to the tables with results for more clarity. Line 281: Please give a more detailed comment on the results - how much better are your results, and in which cases? What conclusions can be derived from these results? Line 290: How do you optimize with L1 and L2 regularization? Please give some details. It is not obvious how this optimization is performed and what the result of it is. Line 295: What does "effective" mean in this context - better performance on the classification task, or execution time? Line 307-308: "Our method eliminates structure bias efficiently." - It is not clear how this is done. 
The details of what exactly you mean by "structure bias", how your method eliminates it, and how this improves the model needs to be more clearly described in the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: No limitations have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your precious comments. 1> Please refer to the PDF file of the Author Rebuttal by Authors. We have carefully organized the contributions of the work there, and we hope it answers some of your questions. All information is in the picture; if the image is small, please enlarge it. 2> A link for the graph-cut algorithm: https://link.springer.com/chapter/10.1007/978-3-030-95391-1_42#Distributed%20Ne 3> For the experiments, the introduction of the data sets can be found in Section 4.1; they are all from the UCI data sets. They are all classification tasks, and we compare the error to measure the performance of our model and of the L1/L2 regularization. From Tables 1, 2, and 3, we can see that our model reduces the error significantly. 4> The compared models SAE and DAG are also introduced in Section 4.1 and the reference section. 5> We have uploaded the code to make sure the experiments are reproducible. 6> Thanks very much. We will revise them carefully. --- Rebuttal Comment 1.1: Comment: I read all the reviews, the authors' responses and the attached PDF. I still think the paper needs some rewriting, in order to better explain what several of the reviewers find confusing. Mainly the following directions still need to be improved: 1) the motivation of the approach; 2) comparison with similar approaches (GNNs and self-attention, as suggested by other reviewers); 3) introduction of concepts used in the paper (see details, for example, in section Questions in my review and in the other reviews); 4) better motivation of why the approach is different and valuable. The paper contains several statements that are not backed with evidence (I have tried to address them per line in the Questions section of my review). 5) improved description of the experiments - details about the compared baselines, information about the compared neural networks - structure, number of layers etc., including description of the compared YNNs. 
I appreciate that the code is available and it will be a valuable addition, but the paper should be very clear about what experiments were executed and should underline how the proposed approach is better. --- Reply to Comment 1.1.1: Comment: Thank you very much for your precious comments. We will carefully revise our paper according to them. On the other hand, we believe that our work is very meaningful. At the heart of our innovation lies a critical reimagining of traditional NNs. Currently, NNs operate on asynchronous tensor flow, often organized hierarchically in a tree-like structure. However, this approach inadvertently hampers the nodes within each level from effective communication, relegating them to mere information carriers devoid of meaningful interaction. This inherent limitation substantially diminishes the potential of NNs, impeding their full capabilities. Our work transcends these constraints by introducing a paradigm shift. We present a method that enables synchronous communication among nodes within the same level, a fundamental departure from the status quo. This transformative adjustment yields a remarkable enhancement in information transformation, thereby significantly boosting the overall capacity of NN structures. By fostering a collaborative environment among nodes, our approach leverages their collective power to unlock unprecedented capabilities. Particularly, what sets our research apart is its inspiration drawn from the intricate dynamics of biological neural systems. Unlike the traditional stacked unit approach, where neural elements operate in isolation, our approach mirrors the cooperative nature of biological neural modules. In these systems, multiple neural units collaboratively execute precise functional implementations, resulting in exquisite performance. 
Our innovation is poised to bridge the gap between artificial and biological neural networks, thus propelling NN structures closer to the remarkable efficiency of their natural counterparts. GNNs and self-attention mainly focus on better organizing the input data to obtain benefits, e.g. node and edge embeddings for GNNs and QKV vectors for self-attention. Essentially, by better organizing the input data, they can better handle graph data or sequence data, as self-attention can mine the key information of the sequence. On the other hand, YNN is a generalization of the NN structure from a tree to a cyclic graph, and its benefits are as described above. They are fundamentally different. For the experiments, the introduction of the data sets can be found in Section 4.1; they are all from the UCI data sets. They are all classification tasks, and we compare the error to measure the performance of our model and of the L1/L2 regularization. From Tables 1, 2, and 3, we can see that our model reduces the error significantly. The compared models SAE and DAG are also introduced in Section 4.1 and the reference section. However, we will carefully revise our paper to describe more details of the experiments as well as the related concepts.
Rebuttal 1: Rebuttal: Please refer to the PDF file. We have carefully organized the contributions of the work there, and we hope it answers some of the questions. All information is in the picture; if the image is small, please enlarge it. Pdf: /pdf/61536891bd8db60b040e0517cd622d7e0869af19.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose a novel neural network model that exploits connections between the nodes of a layer. The authors’ goal is to develop a model that overcomes the structural bias posed by the classical layer structure of the NN. To do this, they propose to consider the model as a bidirectional complete graph for the nodes of the same level, defining a clique for each layer. The authors then test the proposed architecture and compare it with the traditional NN model on 3 datasets. Strengths: The paper discusses an interesting problem and proposes a novel methodology that seems promising. Weaknesses: Overall, the paper is challenging to read, and at times the concepts being discussed are not adequately introduced, making it difficult for the reader to follow the progression of the discussion. Additionally, several critical concepts are unclearly defined. In the introduction, the authors discuss the neural module without explaining what it is. Even the concept of “yoke”, which is central to the discussion, is not adequately introduced and explained in the context of neural networks. In the introduction the authors also discuss the impact of the sparsity constraints; this concept, too, has to be defined and explained to the reader. In the list of contributions, point 4 says that the authors designed a regularization-based optimization, which at this point of the paper is very difficult to understand. Point 5 of the same list discusses the problem of computational complexity, which likewise has not been discussed before. Another issue is the experimental campaign, where the experimental setting and the metric used to perform the comparison are not explained. Indeed, the authors report results over a variety of node counts, but they do not explain why this is significant for showing the advantage of the proposed approach. From Tables 1, 2, and 3, it seems the authors fix the number of nodes for the various architectures and train them. 
In general, this does not seem to me a fair way to compare the models, mainly for two reasons: (i) the architectures of the baselines have to be validated (in particular in terms of the number of neurons, but also considering the other hyperparameters of the model and of the optimization algorithm) in order to find the most suitable setting for the task; (ii) the comparison has to consider the number of parameters of the model, since the structure of the YNN will have many more weights than a standard model (for a fixed number of neurons). Even a description of the settings of the three proposed approaches (YNN, YNN&L1, YNN&L2) would make it easier for the reader to understand the proposed results. Finally, a discussion about the computational burden and a comparison with a standard NN is missing. The experimental evaluation and the discussion of the obtained results should be significantly improved and extended. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In Section 2, the first part discusses models that have architectures slightly different from the standard ANN, but it seems none of them exploit the connections between neurons of the same level. I therefore wonder how these cited models compare to the YNN (theoretically and in terms of performance), because it is not clear to me whether the benefit of the proposed architecture comes from the differentiable structure and/or from the richer connection pattern between neurons. In line 103 the authors state that the proposed learning process is consistent with DARTS. In my opinion, the authors should elaborate further on this point, as it appears unclear to me. In general, Section 2 suffers from the same problem as the introduction: it cites the proposed approach and algorithm, but readers do not know anything about them at this point. Therefore, my suggestion is to move the entire Section 2 after the explanation of the proposed model. 
In Section 3, from lines 133 to 138, the authors discuss the similarity between the ANN structure and the tree structure, but in general I find it incorrect to state that a node in a NN is influenced only by its precursor node, since in a standard NN all the neurons of the previous layer influence the output of the current node. The authors should clarify this point to make it more clear how this description fits the typical NN architecture. In Section 3.2 the authors use the concept of a clique, which is a concept from graph theory that is not that common in ML in general; therefore, in my opinion, Definition 1 should be extended to explain this concept a bit more in depth. Even the concept of “node” (which in the first part seems to be synonymous with “neuron”) should be defined more precisely. As for the forward pass, it is not clear to me how the authors solve the system proposed at the end of page 5, considering that the values of $n_j^i$, which are part of the summation of each row, are also the result of the other system equations. In the backward pass, in particular in Eq. 4, the authors define how they compute gradients for each level. Since $f^{-1}$ appears in this equation, I wonder whether this method works only with invertible activation functions (which could be a strong constraint). In Section 3.5 it is not discussed why the use of L1 and L2 is important for performing structure optimization. Moreover, the meaning of $w^i$ in Eq. 18 is not clear: previously these were defined as the bias of the real values of the nodes in the i-th level, while here they are applied as a function. In Section 3.6: a definition of (or reference to) the NE algorithm is missing. The authors also state that imposing auxiliary sparsity constraints on the distribution of connectedness optimization promotes the learned structure to focus on critical connections. This is a very interesting point, but it seems that a theoretical or empirical proof is missing. 
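To make the reviewer's question about interdependent same-level values concrete: this is not the paper's stated algorithm, but one common way such a system (where each node value depends on other values of the same level) could in principle be evaluated is fixed-point iteration, provided the same-level coupling is a contraction. A purely illustrative numpy sketch, with all names hypothetical:

```python
import numpy as np

def yoked_layer(x_prev, W, U, b, f=np.tanh, iters=50):
    """Illustrative fixed-point iteration for a layer whose node values
    depend on same-level nodes (U) as well as the previous layer (W).
    Hypothetical sketch, not the paper's method: solves x = f(W x_prev + U x + b)."""
    x = f(W @ x_prev + b)                  # initial guess: ordinary forward pass
    for _ in range(iters):
        x = f(W @ x_prev + U @ x + b)      # refine with the same-level term
    return x

rng = np.random.default_rng(0)
x_prev = rng.standard_normal(4)
W = rng.standard_normal((3, 4)) * 0.5
U = rng.standard_normal((3, 3)) * 0.1      # small coupling -> contraction, so iteration converges
b = np.zeros(3)
out = yoked_layer(x_prev, W, U, b)
```

Whether the authors solve the system this way (or, say, by direct linear solve for an identity activation) is exactly what the review above asks them to clarify.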
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your precious comments. 1> Please refer to the PDF file of the Author Rebuttal by Authors. We have carefully organized the contributions of the work there, and we hope it answers some of your questions. All information is in the picture; if the image is small, please enlarge it. 2> The most important forward and backward processes have been carefully organized in the rebuttal PDF of the Author Rebuttal by Authors. The “w” in the L1 and L2 functions should be “W”. Sorry for that. 3> Our backward process is compatible with existing activation functions. 4> A link for the graph-cut algorithm: https://link.springer.com/chapter/10.1007/978-3-030-95391-1_42#Distributed%20Ne 5> The hyperparameters have been well tuned in our experiments, including for the compared models. Since the hyperparameter space is complex, we report results across different numbers of nodes and structures, which is crucial for our contribution. We also present the benefits of our YNN, which can be found in the PDF file of the Author Rebuttal by Authors. 6> Thanks very much. We will revise them carefully. --- Rebuttal Comment 1.1: Title: an important message Comment: Dear NeurIPS reviewer, I am writing to draw your utmost attention to our piece of work. At the heart of our innovation lies a critical reimagining of traditional NNs. Currently, NNs operate on asynchronous tensor flow, often organized hierarchically in a tree-like structure. However, this approach inadvertently hampers the nodes within each level from effective communication, relegating them to mere information carriers devoid of meaningful interaction. This inherent limitation substantially diminishes the potential of NNs, impeding their full capabilities. Our work transcends these constraints by introducing a paradigm shift. We present a method that enables synchronous communication among nodes within the same level, a fundamental departure from the status quo. 
This transformative adjustment yields a remarkable enhancement in information transformation, thereby significantly boosting the overall capacity of NN structures. By fostering a collaborative environment among nodes, our approach leverages their collective power to unlock unprecedented capabilities. Particularly, what sets our research apart is its inspiration drawn from the intricate dynamics of biological neural systems. Unlike the traditional stacked unit approach, where neural elements operate in isolation, our approach mirrors the cooperative nature of biological neural modules. In these systems, multiple neural units collaboratively execute precise functional implementations, resulting in exquisite performance. Our innovation is poised to bridge the gap between artificial and biological neural networks, thus propelling NN structures closer to the remarkable efficiency of their natural counterparts. For a succinct overview of the in-depth details, I encourage you to review the attached one-page PDF in my rebuttal attachment. This document encapsulates the essence of our groundbreaking contribution and underscores the urgency of its consideration. Your attention and support at this juncture are invaluable, and I extend my heartfelt gratitude for your consideration. Warm regards, Authors
Kiki or Bouba? Sound Symbolism in Vision-and-Language Models
Accept (spotlight)
Summary: The Kiki-bouba effect is a well-studied phenomenon in which humans consistently associate sharp and smooth objects with certain phonetics. This paper explores whether such an effect is present within image and language models. Specifically, it looks at Stable Diffusion (a text-to-image generative model) and CLIP (a discriminative model for text and images) to see whether these models form representations of made-up "Kiki" and "Bouba" type words in similarly consistent ways as humans do. The paper is well written, and concludes that these models do indeed demonstrate this same tendency. Strengths: The paper is well written. The methods used to investigate whether this effect is present in these different models appear valid. The observed effect is strong. It's interesting that these models demonstrate this as strongly as they do given they have no auditory dimension, but then again they are models of language. Nice observation though! Weaknesses: It would have been nice to make use of a model that doesn't have any exposure to images. For example, taking BERT and seeing whether the pseudowords also get separated with the same tendency or not? Something analogous to demonstrating kiki-bouba in blind people, which to my lay understanding I believe has been done, but only to a limited extent compared to other validations of the effect in various human populations. I'm unsure if this paper opens up any further significant study, or unlocks anything in an engineering sense, so I think its impact is likely capped. Minor: a citation on line 174 would be good, rather than just "some prior works". Nitpick: Line 24 -- I thought "bœuf" is often translated as "beef", in which case there is a lot of overlap in sounds. This link is no doubt cultural, rather than saying anything about objects having similar-sounding terms across languages. Maybe there is a better example to use though? 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Did you try other ways of scoring? Does distance from centroids of the sharp/smooth groups also display the same effect for example? How important to observing this effect are the details in how scoring is done? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do address the fact that these models don't have the same full set of inputs as humans do. They also note that nothing is discussed about why these models display these traits. These are important to highlight, but the paper has value as an observation that the effect is present. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and comments. We will clarify wording issues and add requested citations in a revised version. Regarding unimodal (text-only) models, we refer to our investigation of these models in the supplementary material (supp Sec 3.1), where we test encoder-only models (similar to BERT) that have been trained on text alone. Our findings show mixed results, with possible sound symbolic effects evident but seemingly weaker than those shown by multimodal models; we leave a thorough investigation of the relative contributions of text and image data to future work (supp L124). Regarding scoring methods – see the response to reviewer kkX7 for an additional zero-shot scoring method. Regarding the question about cluster centroids for scoring, our phonetic scoring method measures distance along the axis which separates the two centroids of pseudowords clusters, effectively comparing relative distances to the centroids of each cluster. Regarding the effect of different scoring methods, we found sound symbolic effects across probing methods (geometric and phonetic scoring, and the additional method proposed in our response to reviewer kkX7). We also found scoring to be robust to the choice of prompt text used in probes (supp Sec 3.2). Regarding the French word “boeuf”, we clarify that this is being used on L23 as a direct citation from Saussure, who used this example verbatim in the source being cited. Additionally, “beef” is actually a loanword in English from Old French [1], making the similarity non-coincidental. Nevertheless, we can replace this with an unambiguous example to avoid confusion. [1] D. Harper. “Etymology of beef.” Online Etymology Dictionary.
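The centroid-axis scoring described in the rebuttal above (projecting embeddings onto the unit axis separating the two cluster centroids) can be sketched as follows. This is a minimal numpy illustration, not the authors' code: the random embeddings are stand-ins for CLIP features, and the helper name is hypothetical.

```python
import numpy as np

def axis_score(embeddings, centroid_a, centroid_b):
    """Score each embedding by its position along the unit axis between two
    cluster centroids (cf. the phonetic scoring described above).
    Positive scores lie toward cluster B, negative toward cluster A."""
    axis = centroid_b - centroid_a
    axis /= np.linalg.norm(axis)
    midpoint = (centroid_a + centroid_b) / 2
    return (embeddings - midpoint) @ axis

rng = np.random.default_rng(1)
sharp = rng.normal(loc=+1.0, size=(20, 8))   # stand-ins for "kiki"-like embeddings
round_ = rng.normal(loc=-1.0, size=(20, 8))  # stand-ins for "bouba"-like embeddings
scores = axis_score(np.vstack([round_, sharp]), round_.mean(0), sharp.mean(0))
```

An AUC over such scores (as reported in the rebuttals) then measures how well the axis separates the two pseudoword classes.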
Summary: The work's goal is to study whether sound symbolism is reflected in vision-and-language (VL) models like CLIP and Stable Diffusion (SD). The work proposes a method called **zero-shot knowledge probing** and verifies a sound symbolism phenomenon, the **kiki-bouba effect**, by evaluating the outputs from CLIP and SD when the inputs are a set of predefined pseudowords, adjectives, and nouns related to sharpness or roundness. In short, the study provides a simple paradigm for investigating sound symbolism in VL models from the VL latent space and verifies the existence of sound symbolism in VL models. Strengths: 1. The work gives a clear explanation of its motivation, methodology, and experimental results. 2. The work discusses the experimental results thoroughly and with some insights. 3. The work provides rich supplementary material, which answered a few questions raised while I was reading the main paper. 4. The work is well organized and tells a beautiful story with linguistic and cognitive backgrounds. In conclusion, I believe the work is solid and insightful. Weaknesses: 1. The work only studies the open-source VL models CLIP and SD, which may be due to limited computational resources. But studying models with different backbones, such as GANs or ViTs, would be more convincing. 2. The **zero-shot knowledge probing** is designed somewhat intuitively, without comparison to baseline methods such as training an individual classifier for text-text and text-image scores, or human annotators. 3. The work does not discuss the reasons behind the sound symbolism phenomenon from the perspective of machine-learning mechanisms, but attributes it to the inherent knowledge in the models. I think the phenomenon might be a result of tokenization, i.e. a letter-level tokenized VL model / language model might produce different results. 4. The work only studies the phenomenon in English, given that L41 says the phenomenon is universal across different languages. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Could you clarify what exactly $|w_{adj}|$ and $|v_{pw}|$ in L206 and L217 are, since the distribution of the adjectives and pseudowords in the latent space is quite unclear? 2. The caption of Table 2 is not clear: how is the list of words sorted, and what does the `POS` line mean? 3. Why do you choose SD to map texts to images, rather than using retrieval methods with CLIP on the LAION-2B dataset? They have exactly the same inherent knowledge. 4. What are the probable applications of this study? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Although the work emphasizes that its study should be taken in context (L309), the inner reasons why VL models show the same sound symbolism effect as humans are not investigated. This should be addressed in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and comments. We address the reviewer’s question regarding tokenization in our global response (item C), and the concern about multilingual results in our global response (item B) and in the response to reviewer NTGx. To address concerns about the generality of our method across different model architectures, we have also tested the SOTA text-to-image models DeepFloyd-IF [1] and Kandinsky [2]. We find significant sound symbolic effects in both models; please see Table 1 in the PDF attached to our global response for full results when evaluating on our pseudowords. For example, DeepFloyd-IF shows geometric score AUC 0.63 and phonetic score AUC 0.98, significantly higher than the 0.50 expected from random chance. We note that these models differ in underlying architecture, training objectives, and with respect to training data. For example, DeepFloyd-IF uses a T5 transformer text encoder (rather than the CLIP text backbone used by Stable Diffusion) and uses a pixel-level denoising U-Net (rather than the latent diffusion architecture and objective used by Stable Diffusion). We will include these results and discussion in a revised version. Regarding probing methods, we focus on zero-shot knowledge probing rather than trained text or image classifiers because we are interested in the inherent knowledge of our models and not the dynamics of training on new data (L183-184), consistent with prior work on probing models’ intrinsic knowledge [3]. To provide an additional zero-shot method to complement our results, we also analyze image generations using image-level geometric properties. In particular, we estimate the “sharpness” or “roundness” of an image by using a Harris corner detector, which looks for points of sharp discontinuity representing geometric corners. 
We estimate the number of corners in generated images and find a significant difference between images generated from the two pseudoword classes, with “sharp” pseudoword images having significantly more corners on average than “round” pseudoword images, confirming that the former are visually sharper than the latter on average. We will include these results in a revised version. We first wish to clarify the wording and terminology asked about in questions 1 and 2. $\left|w_{adj}\right|$ as defined on L206 is a unit vector pointing in the direction in CLIP’s latent space which best separates between the two sets of adjectives from L205 (inserted into the prompts from L187). $\left|v_{pw}\right|$ as defined on L217 is similarly a unit vector pointing in the direction in CLIP’s latent space which best separates between the two sets of pseudowords (L176, inserted into the prompts from L187). In Table 2, POS stands for “Part Of Speech” (in our case, noun or adjective) and the words are sorted by phonetic score $\phi_{\left<w\right>}$. We will include these clarifications in a revised version. Regarding the use of Stable Diffusion (SD) for text-to-image generation versus image retrieval, we believe that there is in fact an important difference in the intrinsic knowledge of SD versus CLIP-guided image retrieval on LAION. While SD and (Open)CLIP were both trained on LAION, the intrinsic knowledge learned by SD’s denoising goal (which requires understanding local regions within images) is likely to be different from knowledge gained from image-level retrieval. Indeed, our quantitative and qualitative results demonstrate that CLIP and SD do not behave identically with respect to intrinsic knowledge, with SD showing stronger sound symbolic effects (e.g. L299). Our primary motivation for including SD in our tests is to investigate the knowledge learned by a popular generative model, in addition to probing the discriminative model CLIP directly. 
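The corner-counting analysis described above can be sketched roughly as follows. A real pipeline would likely use an off-the-shelf detector such as OpenCV's `cv2.cornerHarris`; this numpy-only version, with hypothetical names and constants, merely illustrates the idea of thresholding the Harris response on a grayscale image.

```python
import numpy as np

def harris_corner_count(img, k=0.05, thresh=0.1):
    """Rough numpy-only stand-in for Harris corner counting.
    Illustrative sketch only, not the authors' implementation."""
    Iy, Ix = np.gradient(img.astype(float))
    # structure-tensor components, smoothed with a crude 3x3 box filter
    def box(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    R = Sxx * Syy - Sxy**2 - k * (Sxx + Syy) ** 2   # Harris response
    return int((R > thresh * R.max()).sum())        # pixels with strong corner response

# a white square on a black background has sharp corners
square = np.zeros((32, 32))
square[8:24, 8:24] = 1.0
```

Comparing such counts between image sets generated from "sharp" versus "round" pseudowords would give a purely image-level check of the effect, as the rebuttal describes.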
Regarding applications of our work, we see value in understanding how these models “interpret and respond to language” (L335); in general, V&L models are being used as black boxes without a full understanding of how they understand the visual semantics of language, and we believe that probing them to understand what they have learned may be relevant to model interpretability. We also believe that our findings could have applications to cognitive science and linguistics (L339-344), where sound symbolism has long been a topic of interest and debate; computational methods could be used to provide more evidence for the presence of sound symbolism and to understand its cognitive basis. We will make these points more explicit in a revised version. [1] DeepFloyd/IF-I-M-v1.0 on Hugging Face Model Hub [2] kandinsky-community/kandinsky-2-2-decoder on Hugging Face Model Hub [3] Petroni et al. “Language Models as Knowledge Bases?” EMNLP 2019 --- Rebuttal Comment 1.1: Comment: Thanks for your response. > the concern about multilingual results in our global response (item B) It is a reasonable reply. It will be better if the linguistic feature of different languages is considered in the additional experiments. 
> We note that these models differ in underlying architecture, training objectives, and training data. "training objectives": I do not think their training objectives differ a lot. It might still be worth experimenting with GAN-style models. > While SD and (Open)CLIP were both trained on LAION, the intrinsic knowledge learned by SD’s denoising goal (which requires understanding local regions within images) is likely to be different from knowledge gained from image-level retrieval. From this perspective, you should define "intrinsic knowledge" more clearly in the revised paper, i.e., that it is the knowledge learned by the denoising training objective of SD. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and additional comments. Regarding linguistic features of languages tested, we will provide additional details, such as the language families of each language tested (Hungarian - Uralic/Ugric, Indonesian - Austronesian/Malayo-Polynesian, Finnish - Uralic/Finnic, Lithuanian - Indo-European/Baltic) and an explanation of how these differ from one another and from English, in a revised version. Regarding GAN-style models, we have also experimented with GALIP [1], a GAN-based text-to-image generation model. Using GALIP pretrained on the CC12M dataset with our pseudoword methodology, we see significant sound symbolic associations via our metrics and qualitatively - for example, phonetic score AUC 0.62 and geometric score AUC 0.98, significantly higher than 0.50 expected by chance for each. We will include these results in a revised version. Regarding intrinsic knowledge, we will clarify this wording in a revised version. [1] Tao et al. GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis. CVPR 2023
Summary: The authors present an investigation of the phenomenon of ‘sound symbolism’ in a pair of cutting-edge Vision-and-Language Models. Sound symbolism is an intriguing phenomenon whereby the meaning of a word can be in part traced back to the way the word sounds. It is an important phenomenon in psychology and linguistics because it challenges a strict view of the mapping between form and meaning as arbitrary (stated most famously by Saussure). The authors probe CLIP and Stable Diffusion by creating a small dataset of pseudowords that were constructed to reflect phonetic features that generally map onto “sharp” vs. “round” speech sound categories. They project the embedding vectors in 1024-dimensional CLIP space onto a one-dimensional semantic dimension of interest, defined by two sets of antonym shape adjectives (one set corresponding to synonyms of ‘round’ and the other corresponding to synonyms of ‘sharp’). They also define a phonetic score to measure phonetic/graphemic associations of the shape adjectives with the pseudowords. They analyze the models using these geometric measures, as well as with a human evaluation where participants are asked to select which of two Stable Diffusion-generated images is best described by a pseudoword. Across these evaluations, they find relatively strong evidence for the presence of sound symbolism in these models. Strengths: One of the strengths of this paper is that it demonstrates how ideas from Cognitive Science can drive investigations into modern AI models. The phenomenon of sound symbolism is particularly interesting both for its intuitive appeal and its relation to foundational ideas in Cognitive Science. The paper was well written and largely a pleasure to read. The methods were overall well described and I found the evidence relatively convincing. Weaknesses: The weakest part of the paper for me was the motivations, hypotheses, and implications. 
Yes, Vision&Language Models are increasingly powerful and increasingly deployed, but why are they interesting objects of study for Cognitive Science? We are told that “these methods could provide new insights into the classic questions of what aspects of sound are tied to meaning”, but we are not told how this might happen or what this could look like. As well, the conclusion about “cultural universality” seems like a very difficult and non-obvious question that these methods would provide insight into. And the flip-side, why is sound symbolism an important phenomenon to investigate in these powerful and deployable models? One specific weakness is that there were no hypotheses presented for the models, and you could imagine developing the hypotheses based on how these models were trained. I think there’s a lot to say about this, but all we got was one sentence buried in the results section that mentioned that the models primarily saw valid English text during training and didn’t have access to the actual sound of the words. The second major weakness for me was the description and analysis of the User Study. The authors write “Over two hundred volunteers participated in this study”. The use of the word “volunteers” here suggests that the participants were not paid, which I hope is not the case. Who are these participants? How were they recruited? Was there any ethical review of this study? How many participants exactly were recruited (this is in the supplement, but should be presented in the main text)? On the analysis side, the use of a binomial test is not appropriate as the individual data points are not i.i.d. Participants gave multiple responses and, presumably, multiple items (pairs of images) were rated by multiple participants. 
Thus, something akin to a mixed-effects (multi-level / hierarchical) logistic regression model would be more appropriate to account for the participant- and item-wise variability, and provide a much more realistic analysis of the results (see Gelman & Hill, 2006 for background). I found the Phonetic Scores in 3.2 the most difficult to understand and still am not completely sure I get it. From the math, I would have thought this score was something akin to a control measurement, and a way to contextualize the Geometric scores, but the authors seem to have a different perspective, which I didn’t fully get. Finally, Figures 2 & 3 took up a lot of space for not that much information. They could easily be condensed to make room for some of the context in the response. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: My questions are the same as the weaknesses. I’m not sure all of questions 1-3 need to be answered, but at least 2 of them do. 1. How can these results inform theories in Cognitive Science? 2. What hypotheses can be derived from these models a priori about sound symbolism, presumably by thinking about their training data? 3. Why is sound symbolism important for evaluating sophisticated, deployable AI systems? 4. Can you unpack the Phonetic Scores in a little more detail, and explain their relevance? 5. Please provide more details about the User Study and analyze the results in a more appropriate manner. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations were okay but could be improved by thinking more about the societal impact of these (human) biases getting incorporated into deployable generative models. 
I would bet some of the impacts of sound symbolism have been discussed in the psychological literature, and it would be important to mention them and discuss them in the context of these things getting baked into generative AI. Flag For Ethics Review: ['Ethics review needed: Responsible Research Practice (e.g., IRB, documentation, research ethics)'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and comments. We address the reviewer’s concerns regarding our user study in our global response (item A). In particular, we reiterate that we believe the user study to not be critical to our results, and we are willing to remove or replace it upon request. Regarding motivation, we first note that other reviewers mention the motivation of our work as a strength, referring to it as an “interesting study that is well-motivated” (NTGx), having a “clear explanation of motivation… [and telling] a beautiful story” (kkX7). We explicitly address the reviewer’s questions next, emphasizing aspects of our work which we will better illustrate in a revised version. Regarding question 1 (importance of our results in the context of cognitive science), we believe our results have an impact in this domain for a number of reasons. Firstly, they suggest that sound symbolism is reflected in the linguistic training data of these models, and the presence of sound symbolism in language itself is moderately controversial (notably denied by Saussure, L22-23). Additionally, a series of psychological and linguistic works have investigated whether sound symbolism is reflected in the basic lexicon of English (see citations on L94); if V&L models learn sound symbolic association from valid English text (or other languages) in caption data, it might be due to this effect and could confirm the results of these studies with a new methodology. Finally, investigation into how V&L models infer sound symbolic associations from their training data could potentially shed light on how sound symbolism is learned during human language acquisition. While we do not claim to directly answer these questions, our results suggest that V&L models could provide valuable insights for investigating them. We will state these points more explicitly in a revised version. 
Regarding question 2 (a priori hypotheses regarding models) – when considering text containing nonsense words as input, one might expect models such as Stable Diffusion (SD) to produce purely random or nonsensical output. The competing hypothesis, which we believe to be non-obvious but for which we find strong evidence, is that such models have learned associations between the individual characters used in these nonsense words, and moreover in accordance with associations known from the psycholinguistic literature. Regarding question 3 (importance of our results in evaluating AI systems) as well as the reviewer’s comment about limitations and bias in generative models – in general, we believe that it is important to investigate what is learned by these powerful and deployable models and to provide interpretability to their behavior. In particular, we see value in understanding how V&L models may learn patterns from their training data, particularly patterns which require complex generalization beyond the captions seen during training. While we do not claim to determine the source of the observed effects, we provide evidence that these models have not simply memorized specific pseudowords seen during training (Sec 4.5). Such generalization could shed light on mechanisms leading to various associations and biases exhibited by V&L models, suggesting that they cannot only be explained by looking at individual training instances but possibly requiring an understanding of generalization over complex patterns on a large scale. We will incorporate such a discussion in a revised version. Regarding the statistical analysis of our survey results, we thank the reviewer for the suggestion and re-analyze the data using a mixed-effects logistic regression model. 
In this setting, we regress whether a question is answered correctly by a respondent (categorical response variable: 0 for incorrect answer, 1 for correct answer), where the question identity corresponds to a fixed effect and the respondent identity corresponds to a random effect. We fit this model using the “Lmer” implementation in the lme4 R package for modeling of linear mixed-effects models. For example, analyzing our survey of “kiki / bouba” generations* results in an intercept estimate corresponding to 89% overall success probability (p≈0.001), which isolates the overall success rate from the effects of individual respondents and questions. We will use this statistical analysis in a revised version (contingent on use of our survey data, per the discussion above). *(treating questions from survey versions 1 and 2 as separate question categories, and only using data from respondents who had not previously heard of the kiki-bouba effect) Regarding the phonetic scoring method (Sec 3.2), we clarify that the probe vector (L217) measures the direction in CLIP’s embedding space between the centroids of the two pseudoword classes (either as text or image embeddings, depending on the model being tested). We call this “phonetic scoring” because it is a semantic dimension determined by the phonetics (sound classes) of pseudowords alone. In Table 1 (last two columns) and Figure 3 we examine correlation between ground-truth adjective class and position on this dimension. In Table 2, we show that this dimension in CLIP space corresponds to interpretable semantics of real English words. We will emphasize these clarifications in a revised version. --- Rebuttal Comment 1.1: Title: Thank you Comment: I appreciate the very thoughtful rebuttal. Your explanation surrounding the motivation for the study (i.e., importance in the context of CogSci and a priori hypotheses about models) is well presented and thought provoking. 
I think this was the key thing missing for me with respect to motivation, and you’ve adequately addressed my concern here. So please put this in the paper. I also like your comments regarding the importance of evaluating AI systems with this perspective. So please include that in the paper as well. I also very much appreciate you implementing a more principled statistical analysis, so well done. Finally, regarding the user study: I don’t have strong feelings about whether to include or not (but you may want to consult the Ethical guidelines at NeurIPS about its inclusion). What I do feel strongly about is that you describe the protocol with more precise language (as you did in the rebuttal). E.g., in the paper you should write the population of participants you recruited (i.e., university graduate students) and that they were unpaid volunteers. Also, report the precise number that you collected in the main text, etc. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We will include all of these points in our revised paper.
Summary: This paper determines the extent to which pretrained vision and language models encode phonetic information associated with sharp or round objects. Prior research has shown a cross-lingual tendency to associate sounds with shapes in human studies. In this work, the authors investigate whether this holds for a discriminative (OpenCLIP) or generative (StableDiffusion) machine learning model. The methodology starts by constructing a set of 648 pseudowords based on a set of sharp / round Latin letters. Two prompts are used with the models, in which the pseudoword is used as either a noun or an adjective. The prompts and pseudowords are used to show that both models do indeed exhibit a similar effect to humans. A human study shows a much stronger effect for the original kiki-bouba pair than the 648 pseudowords. The paper also claims that this finding cannot be attributed to the models seeing these examples during pretraining. Strengths: * Interesting study that is well-motivated. * Claims are clearly supported in the experiments. * Human evaluation is used to further support the automatic evaluation metrics. Weaknesses: * No major weaknesses that I could identify Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Do these results hold with a multilingual CLIP model or in a multilingual human evaluation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: * The paper does a good job of discussing potential limitations of the research Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and comments. To investigate whether sound symbolism may exist in a multilingual V&L model, we test the Kandinsky [1] multilingual text-to-image model (which uses multilingual CLIP) with our methodology, on four geographically and linguistically diverse languages: Finnish, Indonesian, Hungarian, and Lithuanian. Please see Tables 2 and 3 in the PDF attached to our global response for full results including metrics and prompt texts in each language. We find non-trivial sound symbolism in this setting in each language; for example, Finnish displays geometric and phonetic AUC 0.69 and 0.94 respectively, significantly higher than the 0.50 expected from random chance. These results suggest that sound symbolism may be learned in a multilingual V&L setting. Even with the results of this additional experiment, we wish to emphasize that we do not claim to demonstrate the universality of sound symbolism (as suggested by reviewer Cwqz). Although this is a topic of interest in the psychological literature, our work focuses on showing the existence of this phenomenon in V&L models, and not on whether it is a universal phenomenon cross-linguistically. We will state this more explicitly in a revised version of our work. [1] kandinsky-community/kandinsky-2-2-decoder on Hugging Face Model Hub --- Rebuttal Comment 1.1: Comment: Thank you for running an additional experiment using a multilingual model. The results are interesting and they make me more confident in my assessment of the work.
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive comments. We respond here to shared concerns as well as items raised by reviewers JJeH and Cwqz, referring to individual responses where more information is given. **(A) Survey – reviewers JJeH, Cwqz, gvCK** The main focus of our work is automatic probing of computational models for sound symbolism; the user study suggests that these results are grounded in human cognition, but it does not test our central hypothesis (whether V&L models have learned sound symbolism). We saw value in showing this additional result and conducted the survey following the informal procedure described below, but we are willing to either remove it entirely or to replace it with a study using Mechanical Turk crowdworkers (following all relevant procedures) upon request. Regarding the methodology used, we used volunteers to avoid concerns about crowdsourcing and contract work, under the good-faith interpretation of NeurIPS’s ethics code that human studies refer to paid workers rather than consenting volunteers. This interpretation was made following discussions with several peers in our institutions, as well as observing that this practice has been followed in papers published in NeurIPS 2022 such as: * Zhang et al. Generalized One-shot Domain Adaptation of Generative Adversarial Networks. (“...we finally collect valid votes from 53 volunteers…”) * Hu et al. Hand-Object Interaction Image Generation. (“...we conduct a user study... There are 20 volunteers participating in this study.”) Following common practices in our institutions, our study was distributed among university graduate students who were not familiar with this research project, and we controlled for prior knowledge of the phenomenon (supp L196, L227). We controlled for native language (supp Table 5) as this would be a natural confounding factor. We did not collect any sensitive personal information or demographic data such as race or education level. 
While reviewer Cwqz mentions these as significant for showing the universality of sound symbolism, we clarify that this is not the stated aim of our work (noting that L32-35 are not discussing our research goals, but rather providing context from the psychological literature). We will clarify these points in a revised version (contingent on any decision regarding the inclusion of the user study). **(B) Multilingual evaluation – reviewers Cwqz, NTGx, kkX7** Please see the response to reviewer NTGx where we conduct an additional experiment to test the cross-lingual generality of our results. Full results for each language tested are shown in Table 2 of the attached PDF, and Table 3 shows the prompts used for each language. We find evidence for the effect in a multilingual V&L model for each of the four geographically and linguistically diverse languages, and we will include these results in a revised version. **(C) Tokenization – reviewers Cwqz, kkX7** Regarding tokenization, we agree that this could potentially contribute to the observed effect. However, we wish to emphasize that our approach tests V&L models end-to-end, agnostic to the source of these effects with respect to the relative contribution of different model-internal components. We do note that OpenCLIP and Stable Diffusion use the same tokenizer and yet show differences in the strength of their sound symbolic effects (e.g. L299), precluding this as the only source of the observed results. We also clarify that we do not adjust the vocabulary of the models or the tokenizers in any way; we use these models as-is and probe them in the zero-shot regime. We will further emphasize these clarifications in a revised version. Regarding reviewer Cwqz’s concern about items being “correctly mapped to their corresponding word vectors in CLIP”, we clarify that we deliberately design our pseudowords to avoid valid English words (L173) and thus they are indeed not found in the tokenizer’s vocabulary. 
This does not mean that the models are incapable of dealing with these inputs. This can be observed with many common English words, which are also not found in the tokenizer’s vocabulary. For instance, “handkerchief” is split into three subword tokens (“hand”+”ker”+”chief”) by the tokenizer, but Stable Diffusion generations for this word show that it does indeed understand what a handkerchief is as a concept. Text encoders are capable of processing words that are not found in their vocabulary by splitting them into subword tokens, and we leverage this property to probe sound symbolism in these models. **(D) Pseudoword construction – reviewer JJeH** We clarify that the construction of pseudowords was fully automatic with no involvement of human participants. These were constructed using the combinatorial procedure described on L167-168 to include all possible combinations of letters matching the given pattern. **(E) Vision-audio models – reviewer Cwqz** While we agree that investigating such models would be an interesting and promising line of research, we base our investigation on an abundance of prior works in psychology and linguistics which investigate sound symbolism in written language. There is an implicit mapping between text and sound in spoken and written language, and the term “sound symbolism” is frequently used to refer to the possible connection between graphemes in text and meaning, as stated explicitly by [1, 2, 3] (which we cite on L88-91). As such, studying models trained on acoustic data would be an interesting direction for further research, but is out of scope of our study. We will emphasize this point in a revised version. [1] Cuskley et al. “Phonological and orthographic influences in the bouba–kiki effect” [2] De Carolis et al. “Assessing sound symbolism: Investigating phonetic forms, visual shapes and letter fonts in an implicit bouba-kiki experimental paradigm” [3] Cwiek et al. 
“The bouba/kiki effect is robust across cultures and writing systems” Pdf: /pdf/b8a3aa5ed1607321a754b5b7ddd13cd59032f728.pdf
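As a concrete illustration of the fully combinatorial construction described in item (D), the sketch below enumerates all pseudowords matching a fixed pattern with `itertools.product`. The CVCV pattern and the letter classes here are toy assumptions for illustration, not the actual classes and pattern from L167-168 of the paper.

```python
from itertools import product

# Illustrative letter classes (assumptions; the paper's actual sharp/round
# phoneme classes and pseudoword pattern are described on its L167-168).
sharp_consonants = ["k", "t", "z"]
round_consonants = ["b", "m", "l"]
sharp_vowels = ["i", "e"]
round_vowels = ["o", "u"]

def pseudowords(consonants, vowels):
    """All CVCV combinations drawn from the given letter classes."""
    return ["".join(letters)
            for letters in product(consonants, vowels, consonants, vowels)]

sharp_words = pseudowords(sharp_consonants, sharp_vowels)  # e.g. "kiki"
round_words = pseudowords(round_consonants, round_vowels)  # e.g. "bubo"
print(len(sharp_words))  # 3 * 2 * 3 * 2 = 36 per class with these toy sets
```

Because the two classes draw from disjoint letter sets, the procedure produces no overlap between sharp and round pseudowords, and no human involvement is needed at any point.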
NeurIPS_2023_submissions_huggingface
2023
Contrastive Lift: 3D Object Instance Segmentation by Slow-Fast Contrastive Fusion
Accept (spotlight)
Summary: This paper aims to solve the task of 3D instance segmentation by leveraging pre-trained 2D instance segmentation models. The authors propose a novel approach to lift 2D segments to 3D via a neural field. This idea is not completely new [38, 51], but the authors propose a contrastive loss that replaces the Hungarian-algorithm-based loss used in [38]. Moreover, the authors propose a synthetic dataset where the number of objects can be controlled, and show that the Hungarian-algorithm-based loss slows down substantially as the number of objects increases. Finally, the authors augment the contrastive loss with a momentum-teacher component (similar to [16]). _[16] Olivier J Hénaff, Skanda Koppula, Evan Shelhamer, Daniel Zoran, Andrew Jaegle, Andrew Zisserman, João Carreira, and Relja Arandjelović. Object discovery and representation networks. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVII, pages 123–143. Springer, 2022._ _[38] Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Norman Müller, Matthias Nießner, Angela Dai, and Peter Kontschieder. Panoptic lifting for 3d scene understanding with neural fields. arXiv.cs, abs/2212.09802, 2022._ _[51] Suhani Vora, Noha Radwan, Klaus Greff, Henning Meyer, Kyle Genova, Mehdi S. M. Sajjadi, Etienne Pot, Andrea Tagliasacchi, and Daniel Duckworth. NeSF: Neural semantic fields for generalizable semantic segmentation of 3D scenes. arXiv.cs, abs/2111.13260, 2021._ Strengths: S1) The proposed scheme to deal with instance label ambiguity is novel. S2) The paper is mostly well-written. S3) The authors show that the proposed contrastive momentum-teacher loss gives good performance and is computationally lighter than the Hungarian-algorithm-based loss. S4) The problem of adapting 2D instance segmentation methods to 3D is useful to real-world applications. 
Weaknesses: W1) The notation has some issues, is confusing in some places, and deviates a bit from prior work. - It is not common to let $y$ represent a mapping. It is common to say that $f$ is a mapping such that $y = f(x)$. Same goes for $Y$. The label $y(u)$ is sometimes referred to as a label and sometimes as a function, which is a bit confusing. - I think $u$ should be the location of a pixel, rather than the actual pixel (which would be in $\mathbb{R}^3$). - $Y(x)$ is not presented as a function of $x$, but as a function of $u$. - $\Theta$ is introduced as both a mapping $\Theta: \mathbb{R}^2\rightarrow \mathbb{R}^D$ and $\Theta: \mathbb{R}^3\rightarrow \mathbb{R}^D$. - Also $\rho$, $c$, and $\Theta$ are introduced as both mappings and actual elements of the codomain of the mapping. - Usually, $x$ and $d$ are used for 3D location and ray direction. While it is not a crime to change notation, I cannot see a reason for replacing $d$ with $v$. - Equation (1) does not link to the input to the nerf ($x$ and $v$). The link can only be found in the text in l128-l129. It would be easier to understand if $R$ was actually described as a function of $u$. W2) The name Slow-Fast is confusing. - The loss component name slow-fast could have been named _student-teacher_ or something with _exponential moving average_. Currently, the name makes it easy to confuse with Slow-Fast [1001]. W3) The prior work [38] also adapts 2D instance segmentation models to 3D instance segmentation using NeRFs. The proposed approach replaces the matching-based loss and the "slow-fast"-component seems necessary for good performance. Is the comparison to [38] completely fair? Except for the proposed changes, are all other things equal, e.g., the underlying instance 2D segmentation approach, NeRF architecture, or training scheme? I find this difficult to tell from the paper text. 
It is clear that the proposed approach yields good performance and trains faster than [38], but it is not clear whether this is purely due to the proposed changes. Minor remarks - "much smaller" on l62 should probably be removed. _[1001] Feichtenhofer, Christoph, et al. "Slowfast networks for video recognition." Proceedings of the IEEE/CVF international conference on computer vision. 2019._ Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1) A prior work [1002] might be relevant to the 3D semantic segmentation background. The reference tackles that problem and predates the references [23, 29, 51, 57]. Q2) How does the proposed approach compare to [38]? See W3. Q3) What does the statement on l290 mean? How does the proposed approach improve noisy 2D segmentations? Q4) On l291, what does it mean for 3D reconstruction to work properly? _[1002] Lawin, Felix Järemo, et al. "Deep projective 3D semantic segmentation." Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I 17. Springer International Publishing, 2017._ Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors provide a discussion that clarifies some important aspects, for instance that the proposed approach supports only static scenes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments which have helped us improve our paper! --- **Response to Weakness 1**: We are thankful for the reviewer's detailed feedback on the notation and presentation. * We will clarify that $u$ is the pixel location. * $\Theta$ should be $\Theta:\mathbb{R}^3\rightarrow\mathbb{R}^D$. And $\theta$, which is defined as $\mathcal{R}(u|\Theta,\rho,\pi)$ in Equation (3), should be $\theta:\mathbb{R}^2\rightarrow\mathbb{R}^D$. We will correct this typo. * We acknowledge the concern regarding the dual usage of {$\rho, c, \Theta$} as both mappings and codomain elements. Our intention was to maintain notation for ease of understanding. We will clarify the context whenever dual usage occurs, aiming to minimize any confusion. * Thank you for the suggestion. We will change the notation to use $d$ for viewing direction, as is common in the literature. * We will ensure that the relationships between equations, especially Equation (1), and their corresponding inputs are clearly stated and explained. --- **Response to Weakness 2**: We acknowledge that the term "slow-fast" is a bit overloaded (e.g. *Feichtenhofer, Christoph, et al. "Slowfast networks for video recognition."*). However, as described in L149-154, we use a *slowly-updated* field to compute the centroid embeddings and encourage compact clustering in the *fast* field, which is a key element of our approach. We hope that this explanation in the paper can justify the usage of the term "slow-fast contrastive fusion" in our method. --- **Response to Weakness 3**: We have made every effort to ensure that the comparison to Panoptic Lifting [38] is completely fair. This can be noted as follows: 1. We purposely use the same underlying architecture as [38] so that we can fairly compare Panoptic Lifting’s matching-based instance loss with our proposed “slow-fast” formulation (*while keeping all other things equal*). 2. 
Apart from the instance segmentation related losses (namely, the matching-based loss for [38] and our slow-fast loss), all the other losses and the training scheme are identical to [38]. 3. We use the same underlying 2D panoptic (semantic+instance) segmenter for all the methods (which includes Panoptic Lifting [38] and our method). For example, we use Mask2Former [7] for experiments with the ScanNet, Replica and Hypersim datasets, and Detic [59] for experiments with our Messy Rooms dataset. We will highlight these aspects in the revised paper so that they are clear to the reader. --- **Response to minor remark**: Thank you. We will remove the phrase “much smaller” and only state that $D \ll L$, which is clearer. --- **Response to Question 1**: Thank you for bringing this method to our attention. It is definitely an influential work in the direction of 2D-to-3D semantic fusion to achieve 3D segmentation. We will include this in the discussion of related works. **Response to Question 2**: Please refer to our "**response to weakness 3**" above which clarifies how we ensure a fair comparison with Panoptic Lifting [38]. --- **Response to Question 3**: The proposed method fuses 2D segments in 3D space, promoting multi-view consistency across frames. By fusing information from multiple views, the approach is able to overcome the limitations of individual noisy 2D segmentations and produce more accurate and robust instance segmentation *in the 3D space*. As a result, when rendering 2D views from the 3D neural field model, we observe an improvement in quality compared to the original 2D segmentations that were used to train the model in the first place. To clarify this further, we quantify this improvement in the Table below. We report the PQ metric achieved on ScanNet [8] by three different 2D segmentation models: MaskFormer [Cheng et al., “Per-Pixel Classification is Not All You Need for Semantic Segmentation”], Detic [59] and Mask2Former [7]. 
We compare this to the PQ score achieved by our method when trained with each of these 2D models' predictions. We can see that our method improves the Panoptic Quality (PQ) significantly in each case. | Method | PQ | | -------- | -------- | | MaskFormer | 41.1 | | Contrastive Lift (trained with *MaskFormer* labels) | 61.7 | | Detic [59] | 42.0 | | Contrastive Lift (trained with *Detic* labels) | 61.6 | | Mask2Former [7] | 43.6 | | Contrastive Lift (trained with *Mask2Former* labels) | **62.1** | --- **Response to Question 4**: By "3D reconstruction working properly" (l291), we mean that the _learned density field_ accurately represents the geometry of the scene. The proposed method fuses the 2D segmentations from multiple views into a 3D neural field, and this fused field is obtained via differentiable volumetric rendering using the density field (denoted as $\rho$ in Section 3). Hence, to ensure multi-view consistency during the fusion, accurate geometry or density field reconstruction is crucial.
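For readers unfamiliar with the moving-average ("slow-fast") scheme referred to in the response to Weakness 2 above, here is a minimal sketch of an exponential-moving-average parameter update; the plain NumPy parameter vectors and the momentum value are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

def ema_update(slow_params, fast_params, momentum=0.5):
    """Move the slowly-updated copy towards the fast (SGD-trained) parameters."""
    return momentum * slow_params + (1.0 - momentum) * fast_params

# Illustrative: the fast field is trained by SGD; the slow field only tracks it.
fast = np.array([1.0, 2.0])
slow = np.zeros(2)
for _ in range(3):  # pretend 3 training steps with constant fast parameters
    slow = ema_update(slow, fast, momentum=0.5)
# With momentum m, after k steps of a constant target: slow = fast * (1 - m**k)
```

In practice the momentum is typically close to 1 (e.g. 0.99), so the slow field changes smoothly and provides stable cluster centroids for the contrastive loss.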
Summary: This paper introduces a novel Contrastive Lift method for 2D segment lifting to 3D reconstruction and instance segmentation. The authors fuse multiview representations obtained from pre-trained 2D models into a unified 3D neural field. They propose a scalable slow-fast clustering objective function that enables segmenting without an upper bound on the number of objects. Additionally, a new semi-realistic dataset is created to evaluate the proposed method, which demonstrates superior performance compared to previous state-of-the-art approaches on both public datasets and the newly introduced dataset. Strengths: 1. The research motivation and key challenges are generally well illustrated and summarized. 2. The newly constructed framework and dataset fit well within the 3D vision field, particularly the flexible and scalable design for lifting 2D segments to 3D. 3. The proposed method is technically sound and achieves SOTA performance on the newly proposed dataset and challenging scenes on public datasets. Weaknesses: 1. The paper lacks a discussion on the complexity of the proposed method and its potential impact. 2. Ablations on different 2D segmenters are not included in the paper. 3. The analysis of the ablations and their corresponding results is insufficient. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to Weaknesses. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! --- **Response to Weakness 1**: Lines 278-288 of the main paper provide an empirical analysis of the time complexity and training speed of our method and compare it to the linear-assignment matching-based method. In summary, the complexity of our proposed method is agnostic to the number of objects in the scene, while the compared matching-based method has an $O(K^3)$ complexity, where $K$ is a hyperparameter that specifies the maximum number of objects. --- **Response to Weakness 2**: Thank you for the suggestion. We have conducted experiments with two more 2D instance segmentation models (*MaskFormer [Cheng et al., "Per-Pixel Classification is Not All You Need for Semantic Segmentation"] and Detic* [59]) in addition to *Mask2Former* [7]. The Table below demonstrates the Panoptic Quality (PQ) metric achieved on the ScanNet dataset [8] by each of these 2D models and by our method when trained with predictions from these 2D segmentation models. In all cases, we can see that our method improves the PQ metric significantly. | Method | PQ | | -------- | -------- | | MaskFormer | 41.1 | | Contrastive Lift (trained with *MaskFormer* labels) | 61.7 | | Detic [59] | 42.0 | | Contrastive Lift (trained with *Detic* labels) | 61.6 | | Mask2Former [7] | 43.6 | | Contrastive Lift (trained with *Mask2Former* labels) | **62.1** | --- **Response to Weakness 3**: Thank you for the suggestion. We will include the above ablation regarding the effect of different 2D segmenter models in the revised paper. We shall also expand the discussion in the ablation studies included in Section 5.2 of the main paper as well as Sections 3 and 5 of the supplementary material. --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: Thanks for the rebuttal. My concerns have been addressed in a satisfying way. Thus I am happy to lift the rating to Accept.
Summary: This paper utilizes contrastive learning for lifting 2D segments to 3D and fuses the learned embeddings by means of a neural field representation, namely Contrastive Lift. The authors further propose a slow-fast clustering objective function, which makes the method scalable for scenes with a large number of objects. To further validate the ability of the method, this paper also introduces a new dataset, Messy Rooms, which includes up to 500 objects as a benchmark for instance segmentation with a large number of objects. The experiments show that the proposed approach outperforms the former SOTA on ScanNet, Hypersim, Replica, and Messy Rooms. Strengths: - The proposed approach employs a low-dimensional Euclidean embedding to represent a 3D instance. The dimensionality D is far less than the number of objects L, making the approach more efficient and easily extended to larger numbers of objects. Most importantly, using the 3D instance embedding implicitly ensures multi-view consistency. It avoids the assignment problem that exists in Panoptic Lifting [43]. - Using contrastive learning and the clustering strategy makes the proposed approach independent of the number of objects, which is more suitable for scenes with a large number of objects. - The slow-fast contrastive learning is scalable to different object numbers and stabilizes the training phase. And the proposed concentration loss ensures the concentration of embeddings within the same cluster, which yields more complete instance segmentation results. - The proposed semi-realistic dataset, Messy Rooms, provides a novel benchmark for testing the performance on scenes with large object numbers. Weaknesses: - In Tab. 1 and Tab. 2, the metric used for evaluation is only PQ^scene^, which is cherry-picked. In Panoptic Lifting [43], mIoU and PSNR are also used for evaluation. A comprehensive comparison according to different metrics should be included. 
- For semantic segmentation, Contrastive Lift should append a new branch to predict the semantic labels and be supervised by the segment consistency loss in [43]. I suppose that the model should be trained specifically for semantic segmentation. Panoptic Lifting can predict semantic and instance labels simultaneously. I am curious about whether the additional supervision on semantic labels would influence the instance segmentation results. Please explain it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The random partition of pixels into two non-overlapping sets needs further explanation. (Randomly choose pixels? Randomly divide images with pre-defined boundaries? What is the proportion between two sets? etc.) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed some limitations in Sec. 6. There is no negative social impact in this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for providing feedback and taking the time to review our work! --- **Response to Weakness 1**: We report the mIoU and PSNR of our method (and also for Panoptic Lifting and SemanticNeRF) in Table 2 of the **supplementary material**. In Tables 1 and 2 of the main paper, we only report PQ$^\text{scene}$ for brevity, as it is the most relevant metric for assessing instance segmentation quality, which is the main contribution of our approach. Since the semantic field and density/color field architecture of our method is identical to Panoptic Lifting [38], we expect to achieve similar mIoU (semantic segmentation performance) and PSNR (view synthesis performance) as [38]. This can be verified from Table 2 of supplementary material. --- **Response to Weakness 2**: We would like to clarify the following points about our architecture and loss functions: 1. As described in lines 168-176, our architecture has separate branches for density, color, semantic and instance prediction. Our architecture is the same as the one used in Panoptic Lifting, which also has these 4 separate branches. We purposely use the same underlying architecture so that we can fairly compare Panoptic Lifting’s matching-based instance loss with our proposed “slow-fast” loss formulation with instance embeddings (*while keeping all other things equal*). 2. We do have supervision on semantic labels, including the segment consistency loss (line 166 in the main paper). We will make this part more clear in the revised paper. 3. Note that the gradients from the semantic loss(es) do NOT propagate to the instance branch (and vice versa), as these are parallel/independent branches. That being said, supervising semantic labels does not affect instance segmentation performance but it affects panoptic segmentation which considers both semantic and instance predictions. --- **Response to Questions**: These pixels are chosen purely randomly from $\Omega$. 
Both of these non-overlapping sets are of equal size. To be more precise: say we want to sample two non-overlapping sets, each of size $N$. We first sample a batch of $2N$ pixels from $\Omega$ and then simply split it into two sets of size $N$. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: The authors address my concerns in the rebuttal. After reading the authors' replies and other reviews, I will keep my rating.
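The sampling procedure described in the response to Questions above (draw a batch of 2N distinct pixels, then split it into two halves) can be sketched as follows; the pixel count and set size are illustrative values, not taken from the paper:

```python
import numpy as np

def sample_two_disjoint_sets(num_pixels, n, rng=None):
    """Sample 2n distinct pixel indices from [0, num_pixels) and split them
    into two non-overlapping sets of equal size n."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.choice(num_pixels, size=2 * n, replace=False)  # no duplicates
    return idx[:n], idx[n:]

# Illustrative: 10,000 pixels in Omega, two sets of 256 pixels each.
set_a, set_b = sample_two_disjoint_sets(num_pixels=10_000, n=256)
```

Because the 2n indices are drawn without replacement, disjointness of the two halves is guaranteed by construction.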
Summary: The proposed method tries to solve the problem of reconstructing a 3D scene together with the underlying instance segmentation. Prior work required either GT tracking data or, in concurrent work, a less efficient way to assign instances. From a set of images, a Neural Radiance Field is reconstructed together with a feature field that represents an embedding of the instance. Instance embeddings are guided by a contrastive loss function that pushes embeddings projected into pixels from the same segment in a 2D segmentation mask closer together and embeddings projecting into different masks apart. To improve the stability of the training, the authors propose an additional loss with a jointly trained, slowly-updated instance embedding field, updated with a moving average over the parameters of the faster field instead of SGD. Instances are later computed by clustering embeddings, which is supported by the third loss term, which uses an average embedding from the slowly updated field to penalize the difference for the fast-field predictions. Specific values in the embedding vector are assigned to semantic classes for semantic segmentation and are directly supervised with the 2D semantic maps. Additionally, the authors propose a novel dataset with up to 500 objects for evaluating future 3D instance segmentation methods. Strengths: So far, 3D instance segmentation methods, such as Panoptic Neural Fields (referenced by the authors as well), required a tracking algorithm or GT tracking data to reconstruct instance labels of the 3D scene, and this method presents a light, optimization-based approach that directly learns an instance embedding from semantic maps through alignment. The authors describe their method in a way that is understandable, and design choices, especially for the loss function and the learning paradigm, are reasonable and justified by ablation and additional experiments. 
In general, this is a well-designed method that leverages the current state of the art and adds interesting new components to allow joint learning of the radiance field and the instance and semantic embeddings. A big plus of the presented method is the additional dataset with up to 500 objects and the submission of the code, which allows the reproducibility of the results. Weaknesses: While the presented evaluation of the proposed dataset shows a clear advantage over state-of-the-art methods on existing and their own synthetic dataset, as well as ScanNet, methods like Panoptic Neural Fields have also shown results on complex outdoor scenes, such as the KITTI dataset. Therefore, an evaluation in such a complex outdoor setting would further strengthen the paper and its usability in future work. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Is a fixed number of clusters given, or how is the number of clusters decided? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors address most of the limitations I can think of and ablate the use of different methods they rely on, such as the clustering method in the supplement. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to study our work and provide thoughtful feedback! **Response to Weaknesses:** Thank you for suggesting the idea to assess the performance of our approach on outdoor scenes, e.g. *KITTI* or *KITTI-360* datasets. We agree that it would strengthen the paper. Unfortunately, we could not produce the results on the KITTI dataset in the given limited time for rebuttal, but we will include results on these outdoor scenes in the final version of the paper. As you have already pointed out, we show in Table 1 that our method outperforms Panoptic Neural Fields on indoor scenes. **Response to Questions:** We ***do not*** assume knowledge of the number of clusters. Instead, we use HDBSCAN (Hierarchical DBSCAN) [31] which is a density-based clustering algorithm. HDBSCAN does not require the number of clusters to be known. It computes a “mutual reachability distance” matrix between all pairs of datapoints and then uses a single linkage algorithm to find clusters in the data. The only hyperparameter in HDBSCAN is the minimum cluster size, which can be set to any reasonable value (e.g. $10^3$ if we have $10^5$ datapoints) or can be grid-searched on a fraction of the training dataset. --- Rebuttal Comment 1.1: Comment: Thanks for the additional comments and clarification! Given there is not much inconsistencies in the reviews, I will keep my rating and recommend acceptance.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and valuable feedback. We are pleased to see positive responses from all reviewers, who acknowledge the novelty [*zMUp*,*RRt9*], design [*KUUX*,*4dod*,*Vjqu*], efficiency [*4dod*,*RRt9*] and performance [*Vjqu*,*RRt9*] of our approach as well as the usefulness of the proposed dataset [*zMUp*, *KUUX*, *4dod*]. We have answered each reviewer's concerns/questions **separately in the individual responses below**. Additionally, we have attached a PDF that contains one figure which addresses a question from *Reviewer zMUp*. Pdf: /pdf/b96621487a477f2466a0d9a99a3464375842ad38.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies 3D object instance segmentation inside a 3D NeRF space. Specifically, to train the model, a contrastive loss between features generated by the slow and fast NeRF models is computed to 1) maximize the feature distance between different semantic regions, 2) minimize the feature distance within the same semantic regions. Also, a dataset is introduced, namely Messy Rooms, which consists of renderings of real captured objects from Google Scanned Objects. Experiments on the dataset show a reasonable improvement in comparison to baselines. Strengths: The proposed contrastive learning for 3D semantic segmentation on NeRF is elegant and novel. I believe such a structure is better in comparison to previous works that directly output the semantic labels from the NeRF network. Also, the introduced Messy Rooms dataset is believed to be useful for the community, despite the dataset being partially synthetic. The overall performance improvements from the baseline of the proposed method are not huge but still reasonable. Weaknesses: 1) The proposed method is only evaluated on the half-synthetic dataset with small objects on the table. It is necessary to evaluate the proposed method on some real images, at least qualitatively. 2) The performance of the segmentations before lifting is not reported in the experiment section; it is unclear how the proposed method improves on the 2D segmentation. 3) The contrastive training pipeline is partially similar to this work [1]; it would be better to include it in the discussion. [1] Bai, Y., Wang, A., Kortylewski, A., & Yuille, A. (2020). Coke: Localized contrastive learning for robust keypoint detection. arXiv preprint arXiv:2009.14115. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How is the ground plane determined and set in the Messy Rooms dataset? 2) The author mentions in line 148 that "gradients with high variance". What is a high variance on gradients? 
The author should quantitatively study how the variance is controlled using the proposed SF loss in comparison to the vanilla contrastive loss. 3) A clustering operator is applied to the feature vectors for segmentation; is this similar to this work [2]? 4) In the visualization, how do the trained feature vectors convert into semantic labels? 5) Some typos: Figure 2 uses L_cntr but Equation 5 uses L_conc. [2] Yu, Q., Wang, H., Qiao, S., Collins, M., Zhu, Y., Adam, H., ... & Chen, L. C. (2022, October). k-means Mask Transformer. In European Conference on Computer Vision (pp. 288-307). Cham: Springer Nature Switzerland. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors include and discuss limitations of this work. There seems to be no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read our paper and provide feedback! --- **Response to Weakness 1**: We do evaluate on real scenes from the ScanNet [8] dataset (please refer to L211-212 and Table 1). We also evaluate using Replica [41], which comprises real 3D scans, and Hypersim [36], which is synthetic but designed to match the complexity of real scenes. In addition to quantitative results in Tables 1, 2 and 3, Figures 1, 2, 3 in the supplementary show visualizations on ScanNet scenes. We will add more visualizations from these datasets in the final version. --- **Response to Weakness 2**: The performance before lifting is not shown because these instance segmentations (obtained from a 2D segmenter) are *not consistent (aka tracked) across frames/views*. Therefore, the $PQ^{scene}$ metric for these 2D methods is very low, because the metric is designed to reflect the cross-frame consistency of the predictions. **Following the reviewer’s suggestion, we evaluate the performance of 2D segmentations (i.e., before lifting) in 2 ways:** 1. Table 1 (see below) reports the _frame-level_ PQ (Panoptic Quality) score for both the 2D segmenter and our method. Note that PQ only evaluates segmentation quality frame-by-frame, regardless of inter-frame consistency. The table shows that our method significantly improves the frame-level 2D segmentation quality. 2. Table 2 (see below) reports the PQ$^{\text{scene}}$ metric, which considers the consistency of instance segmentations across frames. To achieve consistency, we post-process the 2D segmenter's predictions using Hungarian Matching for cross-frame tracking as follows: * **w/ Hungarian matching (2D IoU):** Given sets of predicted segments ($P_i$ and $P_{i+1}$) from consecutive frames, compute the IoU matrix by comparing all segment pairs in $P_i\times P_{i+1}$. Apply Hungarian matching to the IoU matrix to associate instance segments across frames. 
* **w/ Hungarian matching based on IoU after depth-aware pose-warping**: Use *ground-truth* pose and depth for warping $(i+1)$-th frame's segmentation to frame $i$. Compute IoU matrix using warped segmentations and apply Hungarian matching. * **w/ Hungarian matching using the 3D ground-truth pointcloud**: Using only consecutive frames leads to errors in long-range tracking. To address this, starting from 1st frame, un-project 2D segmentations into a 3D point cloud. Iteratively fuse these segments in 3D using Hungarian matching. This way, segments from *all preceding frames* along with 3D information are used for tracking. We note that the last two baselines use the _3D ground-truth_ for matching. The table below shows that, even when 3D information is used, our method still significantly outperforms the baseline 2D segmentation approach. **Table 1**: | Method | PQ (on ScanNet [8]) | | ------------------------------ | ------------------: | | Mask2Former [7] (2D segmenter) | 43.6 | | Contrastive Lift (**ours** trained w/ Mask2Former labels) | **62.1**| **Table 2**: | Method | PQ$^{\text{scene}}$ (on ScanNet) | | -------- | --------: | | Mask2Former (*M2F*) (non-tracked 2D segmentations) | 32.3 | | M2F + Hungarian matching (2D IoU) | 33.7 | | M2F + Hungarian matching based on IoU after ***depth-aware pose-warping*** | 34.0 | | M2F + Hungarian matching using the ***3D ground-truth pointcloud*** | 41.0 | | Contrastive Lift (**ours** trained w/ Mask2Former labels) | **62.3** | --- **Response to Weakness 3**: Thank you for bringing this method to our attention. CoKe is an interesting method that uses contrastive learning along with moving average updates to learn keypoint prototypes. We will include CoKe in the discussion of related methods. --- **Response to Question 1:** The Messy Rooms dataset is generated using the Kubric [12] simulator. We configure the "gravity" vector along the "-z" axis and designate the xy-plane as the ground-plane. 
For details, see the code in `dataset/kubric_panopli_generator_final.py`. --- **Response to Question 2:** In L148, we refer to the variance in loss gradients w.r.t. the embedding field ($\Theta$), i.e. variance of $\nabla_{\Theta} L$. Empirically, we compute the relative variance ($\frac{Var(\cdot)}{Mean(\cdot)}$), finding significantly higher values (across training iterations) with the vanilla loss as compared to the slow-fast version. Please see Figure 1 in the attached PDF. The vanilla loss shows spikes with a peak relative variance near $10^7$, while the slow-fast variant maintains around $10^1$. --- **Response to Question 3:** *kMaX-DeepLab* learns a pixel-cluster assignment by reformulating cross-attention from a clustering perspective. So, it is indeed similar in spirit to our approach, although it's worth noting that the embeddings (and cluster centers) in our method (defined as $\mathbb{R}^3 \rightarrow \mathbb{R}^D$) are learnt using *differentiable volumetric rendering* from 2D labels. We will include a discussion of *kMaX-DeepLab* (along with “CMT-Deeplab: Clustering Mask Transformers for Panoptic Segmentation”, Yu et al. and “Semi-convolutional Operators for Instance Segmentation”, Novotny et al.) in the revised paper. --- **Response to Question 4:** For the ***semantic*** labels, since the features were trained with a cross-entropy loss (similar to [38,57]), we obtain the label as the `argmax` of rendered logits. For the ***instance*** labels, as described in lines 177-181 of main paper and in section 2.3 of supplementary material: 1. the rendered instance features are clustered with HDBSCAN [31], an unsupervised density-based clustering algorithm, to obtain cluster centroids. 2. for pixels in any novel view, the label of the centroid nearest to the rendered embedding is assigned as the instance label. --- **Response to Question 5:** Thank you for pointing this out. It should be $L_{conc}$ in figure 2. We will fix this in the revised paper. 
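The IoU-based Hungarian matching baseline from the response to Weakness 2 above can be sketched with `scipy.optimize.linear_sum_assignment`; the toy masks below stand in for predicted segments of two consecutive frames and are invented for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def match_segments(segs_t, segs_t1):
    """Associate segments across consecutive frames by maximizing total IoU."""
    iou_mat = np.array([[iou(a, b) for b in segs_t1] for a in segs_t])
    rows, cols = linear_sum_assignment(-iou_mat)  # Hungarian: maximize IoU
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: two 4x4 segments per frame; the second frame's order is swapped,
# so the matcher should recover the 0->1 and 1->0 correspondence.
m1 = np.zeros((4, 4), dtype=bool); m1[:2] = True  # top half of the image
m2 = ~m1                                          # bottom half of the image
matches = match_segments([m1, m2], [m2, m1])
```

Negating the IoU matrix turns the maximization into the minimization problem that `linear_sum_assignment` solves.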
--- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: Thanks for the rebuttal. The rebuttal addresses most of my concerns. I will increase my final rating to accept.
Anonymous and Copy-Robust Delegations for Liquid Democracy
Accept (spotlight)
Summary: The paper studies fractional allocation rules when each agent indicates a ranking over the agents that agrees to represent her, to overcome impossibility results of deterministic rules. The authors consider two different rules which are shown that are equivalent. They also provide a polynomial time algorithm for finding the outcome of one of the two rules. Strengths: -Very well-written paper -Interesting model -Natural extension of the axioms to fractional allocations -Technically involved and interesting results -Use of known lemmas in interesting and elegant ways Further notes: Line 73: "casting voter" Line 134: I think you mean "|B|=|V(G)|-1" Weaknesses: The choice of the top-rank priority axiom, while it is quite natural, seems a bit arbitrary. It would be nice to show what kind of other axioms would lead to the same impossibility result. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The choice of top-rank priority is motivated by the proof of the impossibility theorem for the non-fractional setting (Section 1). Consider the example instance with one delegating voter $v_1$ and three casting voters $s_1$, $s_2$, $s_3$, where $v_1$'s first delegation preference is $s_1$ and her second (and last) delegation preference is $s_2$. We show that any anonymous, confluent and copy-robust rule must assign $v_1$'s weight to $s_2$, which contradicts the idea of delegation preferences. Each of the following versions of a fourth property suffices to conclude the impossibility result, without changing any of the results of our paper. (i) In a situation where a voter $v$ has exactly one outgoing edge of rank one and exactly one of rank two, leading directly to casting voters $s_1$ and $s_2$, respectively, a delegation rule should assign all of $v$'s voting weight to $s_1$.\ (ii) In a situation where $v$ has exactly one outgoing edge of rank one and it leads directly to a casting voter $s_1$, a delegation rule should assign all of $v$'s voting weight to $s_1$.\ (iii) A delegation rule should distribute $v$'s voting weight only over casting voters reachable via a walk through the graph that starts with a rank one edge. While (i) is extremely specific to the situation in the proof of the impossibility result, (iii) is relatively general. At the same time, it is debatable whether (iii) is a desired property, while this is quite clear for (i). Due to this trade-off between arbitrariness and certainty of desiredness, we choose (ii) as the definition of top-rank priority, being slightly more general than (i) while still certainly being desirable. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I don't have other questions.
Summary: This paper primarily examines two equivalent fractional delegation rules for Liquid Democracy (transitive delegate voting): Mixed Borda Branching and Random Walk Rule. The main contribution of this paper is the equivalence between these two rules in the generalized setting of fractional delegations. This result is complemented by an axiomatic analysis of the rules w.r.t. notions of anonymity, copy-robustness, confluence, and top-rank priority. Strengths: The equivalence between the two delegation rules is non-obvious and a significant contribution. Moreover, the proof of equivalence is highly non-trivial. These results in Sections 4-5 are of considerable value. But I am not sure if those alone are enough to carry the paper (see Weaknesses). Although perhaps they are. Weaknesses: Unless I have misunderstood something, it doesn't appear that the axiomatic analysis really solves, or resolves, any of the important issues discussed in the introduction. It circumvents them instead in a way that feels cheap. First, allowing fractional delegations clearly means confluence is no longer a constraint. The authors state essentially this in an incredibly oblique way in the proof of Theorem 8, "One can verify that every delegation rule that can be formalized via a Markov chain on the delegation graph (G,c) satisfies confluence." Confluence requires that if v1 delegates to v2, the remainder of their delegation path must be consistent. Suppose, for example, that v1 is the only voter delegating to v2, and v2's delegation is split between v3 and v4. Now we "send" ½ of v1's vote and ½ of v2's vote to v3, and the remaining ½ + ½ to v4. Of course, this is functionally equivalent to sending v1's entire vote to v3 and v2's entire vote to v4, violating confluence. In reality, the axiom of confluence wasn't generalized, it is made irrelevant, because any delegation that violates confluence can be trivially converted to a fractional one that satisfies its relaxation.
Whether we talk in terms of dividing vote tokens or probability distributions over delegations doesn't matter; it's the same principle. Similarly, simple path vs. walk makes no difference here. Second, the authors state that the purpose of giving delegations as orderings is to prevent delegation cycles and isolated voters. In footnote 3 on page 4, the authors admit that they handle isolated voters by ignoring them entirely and removing them from the graph. However, it is not discussed how this bears on the axioms. Ignoring such voters is the same as assigning them a voting weight of zero, whereas if they cast their vote directly they get a voting weight of at least one (and possibly more if other isolated voters delegate to them). This violates copy-robustness, but this violation does not appear to be mentioned. Lastly, while this is a minor note, the authors should not say that they generalized the axioms, but rather that they relaxed the axioms. The authors generalized the class of delegation rules, and relaxed the axioms required. Edit: Based on the author(s) rebuttal, I have improved my understanding of the paper and its results. I no longer stand by my original criticism. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1: Am I correct in saying that ignoring the isolated voters ultimately violates copy-robustness? Q2: Am I correct that fractional delegation trivializes the axiom of confluence? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No further limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Am I correct in saying that ignoring the isolated voters ultimately violates copy-robustness?\ **A:** No.\ Before justifying our answer, we want to clarify the handling of isolated voters. The motivation for introducing ranked delegations is to mitigate the risk of having isolated voters, i.e., voters that cannot reach any casting voter through a chain of delegations. Of course, even with ranked delegations there can still be isolated voters; however, it was empirically shown (Brill et al. [2022]) that few backup delegations lead to almost no isolated voters in many random graph models. We remove remaining isolated voters from the instance, as is standard in the liquid democracy literature, since there is no way of assigning them reasonable representatives. We can then assume for all our definitions that each delegating voter has a path to a casting voter in the delegation graph. That said, the axiom copy-robustness captures the impossibility of manipulation by a delegating voter copying the vote of its representative(s). Assuming this delegating voter knows its representative(s) chosen by some delegation rule, the voter could copy their vote instead of delegating. Copy-robustness then requires that this does not change the joint voting weight of the delegating voter and its representative(s), thus preventing manipulations of this type. Since an isolated voter has no representatives and therefore nobody's vote to copy, the described situation of an isolated voter deciding to become a casting voter instead does not classify as a manipulation and is therefore not (and should not be) captured by the copy-robustness axiom. In fact, in a model that includes isolated voters, an isolated voter increasing its voting weight by casting its vote would be expected (and desired) behavior.
\ **Q2**: Am I correct that fractional delegation trivializes the axiom of confluence?\ **A:** No.\ Before justifying our answer, we would like to emphasize that a delegation rule **assigns each delegating voter** a probability distribution over casting voters. Crucially, confluence captures a consistency across the distributions assigned to different delegating voters. \ Considering only the total voting weight of v3 and v4 in the example given by the reviewer and calling all delegation rules yielding this outcome 'functionally equivalent' contradicts our central definition of a delegation rule and also the idea of confluence to compare the voters' individual assignments. This means that even though the two example assignments yield the same outcome in terms of total voting weight, the underlying delegation rules are different and can have different axiomatic properties. To show the non-trivial nature of confluence on an example, assume that in the example given by the reviewer, voter v2's first preference for delegation is v3 and v4 is the second. In this extended example, top-rank priority alone would only require v2's vote to be delegated fully to v3. However, if we enforce confluence as well, then v1's vote must be delegated to v3 as well. **C:** Lastly, while this is a minor note, the authors should not say that they generalized the axioms, but rather that they relaxed the axioms. The authors generalized the class of delegation rules, and relaxed the axioms required.\ **A:** We would like to clarify exactly what we mean by 'generalizing' an axiom. The axioms we examined were defined on a model without fractional delegations, which we generalized to a model with fractional delegations, as recognized by the reviewer. Some of the axioms then needed adjustment to this new setting.
We adjusted the axioms in a (to our perception) natural way, making sure that if applied to the special case of a non-fractional delegation rule, they correspond precisely to the original axioms. (For all axioms besides confluence this fact is easy to see. For confluence, we recently wrote a proof for this claim, which we are happy to provide upon request.) We therefore refer to the new axioms as 'generalized', in contrast to 'relaxed', which is the term we would use for an axiom that was weakened, i.e., that is implied by its original version. --- Rebuttal Comment 1.1: Title: Updating my View Comment: First, I thank the authors for their detailed answers to my questions. I can now see that my criticism that fractional delegation trivializes the axiom of confluence was incorrect. I retract this criticism. It still appears there is a small problem with isolated voters, but this is an inescapable part of the nature of the problem. There is, in a sense, a violation of the axiom, but not in a way that conflicts with the motivation, and this is certainly not a reason for rejection. My view of the paper is now strongly positive and I argue for acceptance.
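The point made in this exchange can be illustrated with a minimal Python sketch of the reviewer's example (the graph and the two assignments are illustrative, not taken from the paper): two delegation rules can yield identical total voting weights for the casting voters while assigning different per-voter distributions, and it is the per-voter distributions that the generalized confluence axiom compares.

```python
from fractions import Fraction

half = Fraction(1, 2)
# Illustrative graph: v1 delegates only to v2; v2 splits between casting
# voters v3 and v4. Two per-voter assignments with identical totals:
confluent = {            # v1 inherits v2's split (walks are consistent)
    "v1": {"v3": half, "v4": half},
    "v2": {"v3": half, "v4": half},
}
non_confluent = {        # v1 fully to v3, v2 fully to v4
    "v1": {"v3": Fraction(1), "v4": Fraction(0)},
    "v2": {"v3": Fraction(0), "v4": Fraction(1)},
}

def totals(assignment):
    """Total voting weight received by each casting voter."""
    out = {"v3": Fraction(0), "v4": Fraction(0)}
    for dist in assignment.values():
        for s, w in dist.items():
            out[s] += w
    return out

print(totals(confluent) == totals(non_confluent))  # True: same totals
print(confluent == non_confluent)  # False: different per-voter assignments
```

Both assignments give v3 and v4 one unit of weight each, yet only the first treats v1's vote the way v2's own vote is treated.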
Summary: The authors study liquid democracy with fractional (i.e., splittable) delegations. They extend previous work by Brill et al. on delegation rules that satisfy anonymity and copy-robustness, and demonstrate that two delegation rules (mixed Borda branching and the random walk rule) that were previously thought to be different and each satisfy one property are actually the same rule that satisfies (generalized versions of) both properties. Their algorithm for computing the outcome of the combined rule is also the first efficient algorithm for a problem in semi-supervised learning, the directed power watershed. Strengths: + The paper is well-written and easy to follow, and the problem they study is well-motivated in the context of liquid democracy. + The algorithm is nontrivial and of independent interest to other communities in computer science. + It's also nice to see that two previously-proposed rules are actually the same; it's extra satisfying that this rule happens to satisfy generalizations of two (really, four) axioms that were thought to be hard to simultaneously satisfy. Weaknesses: - My main hesitation with this paper is the use of fractional delegations in liquid democracy. One central tenet of LD is the ability to immediately remove a delegation from someone who cast a vote in a way you did not approve of, which becomes much more difficult with splittable delegations. Additionally, if you allow agents to know exactly where portions of their vote ended up, they will potentially be given a lot of information about the rest of the delegation network. - Typos: line 73: casting, line 166: copy-robust, not copy-robustness, line 342: lens Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Have you thought about ways of explaining the outcomes of MBB / RWR to users of liquid democracy systems? Are there intuitive ways of showing the flow of splittable votes through the network? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: A possible explanation could be the following reinterpretation of the rules, motivated by the nature of the algorithm. In the last step of the algorithm we solve an absorbing Markov chain on a partially contracted delegation graph, where the outgoing edge probabilities of each contracted vertex depend on its inner structure of delegation preferences. The assignment of the rule will then be exactly the same for any voters contained in the same contracted vertex. The rule could therefore be interpreted (and explained) as a rule aggregating the lower delegation preferences of voter clusters that only delegate to one another with their higher preferences. We could then present the flow constructed from the Markov chain on the contracted graph as an explanation of how the aggregated delegation preferences are finally resolved into delegations. This flow is polynomial-time computable, since it is a byproduct of the computation of the absorbing probabilities. A downside of this method is that it only offers a partial explanation, since the aggregation of delegation preferences is not explained in detail. If we want to construct a flow on the (uncontracted) delegation graph directly, there are multiple natural ways to do this for each delegating voter's vote independently, depending on whether we look at the rule from the mixed Borda branching or random walk perspective. However, these individual flows may be inconsistent with one another, in the sense that the relative split of flow at a specific voter might be different in the flows of different voters. In fact, we can show that it is in general not possible to construct consistent flows that correspond to the assignment of our rule for all voters.
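The absorbing-probability computation mentioned above can be sketched as follows; the tiny delegation graph and its transition probabilities are hypothetical, chosen only to illustrate the standard computation $B = (I - Q)^{-1} R$ for an absorbing Markov chain, done here with exact fractions.

```python
from fractions import Fraction

def absorbing_probabilities(Q, R):
    """Solve B = (I - Q)^{-1} R by Gauss-Jordan elimination over fractions.
    Q: transitions among delegating (transient) voters; R: transitions to
    casting (absorbing) voters. Row i of B is voter i's final assignment."""
    n, m = len(Q), len(R[0])
    # Augmented matrix [I - Q | R], converted to exact fractions.
    A = [[(1 if i == j else 0) - Q[i][j] for j in range(n)] + list(R[i])
         for i in range(n)]
    A = [[Fraction(x) for x in row] for row in A]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Hypothetical example: v1 sends 1/2 of its weight to v2 and 1/2 to s1;
# v2 sends everything to s2.
Q = [[0, Fraction(1, 2)], [0, 0]]   # v1, v2 -> (v1, v2)
R = [[Fraction(1, 2), 0], [0, 1]]   # v1, v2 -> (s1, s2)
B = absorbing_probabilities(Q, R)
print(B)  # v1 split evenly over s1 and s2; v2 fully at s2
```

Each row of `B` is a probability distribution over the casting voters, matching the definition of a delegation rule used in the rebuttal.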
Summary: The paper studies liquid democracy where voters can delegate their votes to others instead of casting them directly. Within this framework, some voters act as casting voters, while others delegate their votes. Delegation rules determine how casting voters are chosen for each set of delegating voters. However, voters have preferences regarding whom they trust more to represent their votes. When the entire vote must be delegated, it becomes challenging to satisfy all of several desirable axioms simultaneously. Instead, the authors explore fractional/probabilistic delegations, allowing a vote to be split across casting voters. They demonstrate that by employing the random walk rule, it is possible to recover the possibility of meeting all axioms. Moreover, the authors note that the distribution over branchings, ensuring that every delegating voter is connected to a casting voter, follows a uniform distribution over all minimum-cost branchings. This connection has also been previously observed in the work of Fita Sanmartin et al. To compute the delegation distribution efficiently, the authors propose a polynomial-time algorithm that builds upon Fulkerson's algorithm. Strengths: - Fractional delegations are a natural extension and furthermore they are transparent in the sense that a voter can know what proportion of her vote was transferred to which casting voter. - The result on the possibility of confluence, anonymity and copy-robustness if we move to the realm of fractional delegations is nice to have. - The authors leverage results from graph theory and combinatorial optimization to obtain their results, hence introducing techniques from this literature to their community. Weaknesses: - Equivalence of Mixed Borda Branching and Random Walk rule: Given an understanding of the Markov chain rule, the equivalence between the Random Walk rule and Mixed Borda Branching seems immediate.
The authors' claim of surprise at this equivalence is somewhat perplexing, as the connection should have been anticipated with knowledge of the Markov chain rule. It would have been more understandable if other authors had defined Mixed Borda without recognizing this link, but the authors themselves acknowledge that this interpretation has already been observed by Fita Sanmartin et al. Consequently, the discussion on equivalence could be shortened by referring to Fita Sanmartin et al.'s work rather than presenting it as a surprising result. - Algorithm 2 and the Directed Power Watershed: Unfortunately, I cannot confirm whether the Directed Power Watershed algorithm is novel to this paper. It appears that Sanmartin et al. discuss an extension of the undirected version to address the directed case (Page 5: "In section 5, we show how the Power Watershed can be generalized to directed graphs by means of the DProbWS.", Page 8: "In the ProbWS paper [12], it was proven that the Power Watershed [6] is equivalent to applying the ProbWS restricted to the minimum cost spanning forests. This restriction corresponds to the case of a Gibbs distribution of minimal entropy over the forests. In this section, we will prove the analogous result for the DProbWS: When the entropy of the Gibbs distribution over the directed in-forests (3.1) is minimal, then DProbWS is restricted to the minimum cost spanning in-forests (mSF). This permits us to define a natural extension of the Power Watershed to directed graphs."). So they claim to leverage existing knowledge (Power Watershed: A Unifying Graph-Based Optimization Framework, 2010). However, if the problem remains unsolved, a discussion of the Power Watershed algorithm and its limitations in extending to the directed case would be necessary. It is worth noting that Algorithm 2 has limitations in practicality, with a runtime of O(n^7) (up to log factors), while Power Watershed may offer better efficiency.
- The anonymity gained through randomizing over min-cost branchings seems not surprising. It is unclear where the challenges lie in proving that the other properties still hold (as the authors seem to demonstrate more general versions of the axioms). It would be valuable to have an in-depth discussion of the semi-supervised learning literature, as the problem of liquid democracy appears to be studied under different names with distinct requirements in various fields. Exploring the Power Watershed algorithm and its directed extension, along with providing a comprehensive explanation of the connections between different fields, would be a very useful contribution. This could introduce the semi-supervised learning literature to the computational social choice community, while also allowing the semi-supervised learning community to benefit from the axiomatic analysis conducted in this work. If the extension of the Power Watershed algorithm to the directed case from the literature turns out to be faulty or very unclear, I will increase my score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you provide an explanation for why Fita Sanmartin et al.'s claim about extending the Power Watershed algorithm is incorrect or why it cannot be straightforwardly extended? How does the runtime of your algorithm compare to that of Power Watershed? - What makes the Equivalence of Mixed Borda and Random Walk Rule surprising? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No concerns, limitations are addressed in the appendix.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: If accepted, we are happy to use the additional page to elaborate on the connection to semi-supervised learning, in particular, by providing an in-depth discussion of [1] and [2]. ## Q1 **Short** Indeed, we think that some formulations in [1] are ambiguous and give the impression that the paper would introduce an **algorithm** for finding the directed Power Watershed solution. Instead, the paper presents an algorithm (DProbWS), which is only well-defined for a fixed parameter $\mu$ and shows that solving a parameterized version of DProbWS and then taking $\mu \rightarrow \infty$ would lead to the directed Power Watershed solution. However, the question whether a parameterized version of DProbWS can be solved efficiently is completely left open. While [2] presents a similar result in the undirected case, they also present an efficient algorithm for the limit case. Thus, when [1] mention a "generalization" to directed graphs, we understand that this only refers to the former result but not to the presentation of an algorithm solving the limit case. **Detailed** The paper introduces a digraph $G=(U \cup S,E)$ with a cost function $c$ on the edges and weight function: $$w(e) = e^{-\mu c(e)}.$$ For every $q \in U, s \in S$, the algorithm solves the linear system $L \cdot x_q^s = -B_s$ and outputs $x_q^{s}$. Importantly, the matrix $L$ and the vector $B_s$ consist only of sums of $w(e), e \in E$. Hence, when $\mu \rightarrow \infty$, all elements of $L$ and $B_s$ may be zero (e.g., if all costs are positive), and the linear system may have infinitely many solutions. In Rem. 1 of [1], the authors acknowledge that the sum of weights of all branchings connecting any $q \in U$ to $S$ needs to be non-zero in order for the algorithm to be well-defined. This is violated for $\mu \rightarrow \infty$. We cite Thm 5.1.
from [1] in our words: If $\mu \rightarrow \infty$, then $$x_q^{s} = \frac{\text{no. min-cost bran. connecting } q \text{ to } s}{\text{no. min-cost bran.}},$$ where $x_q^s$ is defined by DProbWS. Given that $x_q^{s}$ is not defined for $\mu \rightarrow \infty$, we believe a more accurate variant of the statement would interpret $x_q^{s}$ as a function of $\mu$ and state $$\lim_{\mu \rightarrow \infty} x_q^{s}(\mu) = \frac{\text{no. min-cost bran. connecting } q \text{ to } s}{\text{no. min-cost bran.}}.$$ This is also shown in the proof. While DProbWS can compute $x_q^{s}(\mu)$ for any $\mu \in \mathbb{R}$, its running time increases in $\mu$, as the running time of solving a linear system depends on the size of the input. Alternatively, one could compute the function $x_q^{s}(\mu)$ and then take its limit. We thought along these lines and tried to build upon algorithms for parameterized Markov chains; however, all of the algorithms that we found (e.g., [3]) have exponential running time. ## Q2 **Short** While the Power Watershed algorithm [2] (PW) is similar in spirit to our algorithm, the fact that we consider directed trees with dedicated root nodes leads to a more complex algorithm with a higher running time. We believe that this complexity is inherent to the problem and, to some extent, unavoidable (see detailed answer). We don't find this surprising: even the classic min-cost spanning tree problem can be solved by a greedy algorithm and forms a matroid, but this structure gets lost when moving to its directed variant. **Detailed** Recall the goal of PW (our Alg. 2, respectively). Given an undirected (directed, resp.) graph $G=(N \cup S, E)$ with cost function $c: E \rightarrow \mathbb{N}$, compute for every $v \in N$, $s \in S$, the value $x_{v,s}$ corresponding to the relative number of min-cost spanning forests, aka MSF (min-cost branchings, resp.), in which $v$ reaches $s$.
To illustrate the different complexities, we restrict ourselves to the special case in which the subgraph induced by $N$, i.e., $G[N]$, contains one connected component (strongly connected component, resp.), edges in $N \times N$ have cost $1$, and edges in $N \times S$ have cost $2$. Undirected case: Any MSF in $G$ consists of a spanning tree in $G[N]$ and one edge from $N \times S$. Importantly, any min-cost spanning tree in $G[N]$ combined with any edge from $N \times S$ forms a MSF. Making use of this property, PW contracts all nodes in $N$ (without any additional computation), and then only needs to compare the number of edges in $N \times \{s\}$ for each $s \in S$ to compute $x_{v,s}$. Directed case: This time, any min-cost branching in $G$ consists of an in-tree in $G[N]$ and one edge from $N \times S$. However, since in-trees have dedicated root nodes, we cannot combine any in-tree in $G[N]$ with any edge in $N \times S$. Hence, before contracting the set $N$, Alg. 2 needs to compute the relative number of in-trees rooted at each $v \in N$ in order to calculate $x_{v,s}$ in the next step. Coming back to the general case, while both algorithms construct a hierarchical structure of subgraphs, the crucial difference is that PW only carries out calculations at the top level of this hierarchy while Alg. 2 needs to carry out calculations at each level. This leads to a blow-up of the running time (ignoring log factors) from $\mathcal{O}(n^3)$ (to the best of our understanding) to $\mathcal{O}(n^7)$. That said, our goal, given the broad scope of the paper, was to obtain an exact and poly-time algorithm, and our upper bound might be improved by a more involved analysis. ## Q3 We agree that, for experts on Markov chains, Thm. 5 is unlikely to be surprising. However, this does not hold for people not aware of the Markov chain tree theorem, and we expect this to be the case for large parts of the paper's audience. Thus, we think that Sec.
5 is of interest for many readers and by presenting the very short proof, we are upfront about the fact that the result follows easily from known results. That said, we agree to weaken our statement to, e.g., "while this result might be surprising for some readers, in fact, it follows rather easily by building upon...". --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. Discussing the connection to semi-supervised learning earlier in the paper will contribute to its suitability for NeurIPS, and this will also be a good place to mention that the equivalence of the two rules also follows from results in [1]. Given that, contrary to my initial impression, [1] does not already solve your problem, I will increase my score.
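The limit behavior of the Gibbs weights $w(e) = e^{-\mu c(e)}$ discussed in Q1 of this exchange can be illustrated numerically; the two-edge instance below is a hypothetical toy example, not taken from [1].

```python
import math

# Hypothetical two-edge instance: delegating voter q with a cost-1 edge to
# casting voter s1 and a cost-2 edge to s2 (illustrative values only).
# Under Gibbs weights w(e) = exp(-mu * c(e)), the probability that q's
# weight goes to s1 is w1 / (w1 + w2) = 1 / (1 + exp(-mu)).
def prob_s1(mu):
    w1 = math.exp(-1.0 * mu)  # weight of the cost-1 edge
    w2 = math.exp(-2.0 * mu)  # weight of the cost-2 edge
    return w1 / (w1 + w2)

for mu in (0, 1, 5, 20):
    print(mu, prob_s1(mu))
# As mu grows, the probability tends to 1: in the limit only the min-cost
# branching (the single cost-1 edge) retains mass, matching the limit
# statement quoted in the rebuttal. Both raw weights tend to 0, which is
# why the linear system itself degenerates at mu = infinity.
```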
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewers for taking the time to review our submission. In the following, we address each of the reviewer's comments and concerns individually. ## References [1] Fita Sanmartin et al. "Directed Probabilistic Watershed." (2021)\ [2] Couprie et al. "Power watershed: A unifying graph-based optimization framework." (2010)\ [3] Hahn et al. "Probabilistic reachability for parametric Markov models." (2011)\ [4] Brill et al. "Liquid democracy with ranked delegations." (2022)
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies algorithms for assigning delegation weights in liquid democracy in the setting where participants indicate a set of trusted delegates, along with a ranking describing their preferences over these delegates. Notably, the algorithms it considers permit fractional delegations - i.e., voters have voting weight 1, and they can delegate it fractionally across multiple possible delegates. The paper considers two algorithms for finding a delegation graph (i.e., an assignment of fractional delegations from voters who wish to delegate, to trusted delegates). These two algorithms are: (1) the random walk rule, a Markov-chain-based rule proposed in past work; and (2) the mixed Borda branching rule, which is a uniform average over all min-cost branchings, called "Borda branchings", where branchings are taken in a weighted digraph representing to which casting voters each voter is willing to delegate. The paper makes three main contributions: (1) Provides a polynomial-time algorithm for computing mixed Borda branching. This algorithm is an adaptation of Fulkerson's algorithm, and relies on two of its canonical properties (as proven by Fulkerson). This algorithm is of independent interest to the directed power watershed problem. (2) Proves the equivalence of rules (1) and (2) above. (3) Shows that rule (1) (and therefore rule (2)) satisfies three axioms: confluence, anonymity, and copy-robustness — a combination of axioms identified in past work that cannot be simultaneously achieved with delegation rules that require voters to delegate their voting weight to a single delegate.
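The mixed Borda branching rule summarized above can be sketched by brute force on a toy instance (the graph and costs below are illustrative assumptions, not from the paper): enumerate all branchings, keep those of minimum total cost, and average uniformly over them.

```python
from fractions import Fraction
from itertools import product

# Illustrative instance: delegating voters v1, v2; casting voters s1, s2.
# Edge costs encode delegation ranks.
out_edges = {
    "v1": [("s1", 1), ("v2", 1)],  # v1 ranks s1 and v2 equally
    "v2": [("s2", 1)],
}
casting = {"s1", "s2"}
delegating = list(out_edges)

def resolve(choice, v):
    """Follow v's delegation chain to a casting voter, or None on a cycle."""
    seen = set()
    while v not in casting:
        if v in seen:
            return None
        seen.add(v)
        v = choice[v]
    return v

# Enumerate all branchings: each delegating voter picks one outgoing edge
# and everyone must reach a casting voter. Keep the min-cost ones.
branchings = []
for pick in product(*(out_edges[v] for v in delegating)):
    choice = {v: t for v, (t, _) in zip(delegating, pick)}
    if all(resolve(choice, v) for v in delegating):
        branchings.append((sum(c for _, c in pick), choice))
best = min(c for c, _ in branchings)
mincost = [ch for c, ch in branchings if c == best]

# Mixed Borda branching: uniform average over all min-cost branchings.
share = Fraction(1, len(mincost))
assignment = {v: {s: Fraction(0) for s in casting} for v in delegating}
for ch in mincost:
    for v in delegating:
        assignment[v][resolve(ch, v)] += share
print(assignment)  # v1 is split 1/2, 1/2 over s1, s2; v2 sits fully at s2
```

The exponential enumeration is only for illustration; the paper's contribution is precisely a polynomial-time algorithm for this quantity.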
Strengths: - The high-level structure of the paper is clearly laid out - Some aspects of the paper's analysis (e.g., axiom iii) are practically motivated, and the setting of liquid democracy in general is well-motivated - The authors made an effort to make the paper understandable, with multiple diagrams to aid explanations - The paper speaks to and builds on multiple aspects of the literature (e.g., random walk rule, existing axioms) - The technical results seem to hold, and there is sufficient technical exposition to understand why the results are true. Weaknesses: 1) The potential impact of this paper is not entirely clear to me. What is the main technical challenge with proving the results and what makes it hard? Why are axioms (i) and (ii) important for a rule to satisfy in practice? 2) I found it very hard to understand several phrases in the introduction. In many cases, it seems that the paper is assuming too much specific knowledge of the liquid democracy literature. For example: - Line 47: "(i.e., a digraph with a rank function on the edges)" what is a "rank function on edges"? - Line 51: In the explanation of confluence, what is a "subpath of v1"? - Line 64: "However, since none of the axioms connects the meaning of the ranks to the decisions of the delegation rule" I do not understand what this sentence means. - Line 66: "A single top-rank edge to a casting voter should always be chosen over any other delegation path." I don't understand what this sentence means. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Was it previously known whether the random walk rule satisfies these three axioms simultaneously? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q:** Was it previously known whether the random walk rule satisfies these three axioms simultaneously?\ **A:** No. For both copy-robustness and confluence, the question whether the random walk rule satisfies (reasonable generalizations of) the axioms was open. In particular, in order to prove the copy-robustness result, we heavily use both the equivalence towards Borda Branching (Theorem 5) as well as the algorithm for computing the Borda Branching outcome (Section 4). Lastly, the fact that the random walk rule satisfies anonymity -- while not explicitly mentioned in the literature -- is rather straightforward and should not come as a surprise. **Q:** What is the main technical challenge with proving the results and what makes it hard?\ **A:** The main technical contribution is threefold: (a) the development of our algorithm and its proof of correctness (Algorithm 2, Theorem 4), (b) the copy-robustness proof (Theorem 7), and (c) the confluence proof (Theorem 8). We describe the main challenges below: (a) Two crucial building blocks that serve as the base of the development of the algorithm are two results coming from different research fields: (1) the Markov chain tree theorem, and (2) Fulkerson's algorithm. While the Markov chain tree theorem allows us to count branchings in graphs without costs, Fulkerson's algorithm provides us with a tool to divide the graph with a cost function into subgraphs. As a result, we were able to reduce the task of counting min-cost branchings in the original graph to the task of counting branchings in subgraphs. These subgraphs are not disjoint from one another and the counting has to be done along the hierarchy prescribed by Fulkerson's algorithm, leading to a delicate construction of the subgraphs, including a non-trivial choice of weight functions (not to be confused with cost functions).
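The branching-counting step can be illustrated with the directed matrix-tree theorem on a hypothetical three-vertex graph (not from the paper): the number of spanning in-trees toward a root equals the determinant of the out-degree Laplacian with the root's row and column removed, verified below by brute force.

```python
from itertools import product

# Hypothetical delegation graph: delegating voters a, b and a single
# casting voter s, with edges a->b, a->s, b->a, b->s.
edges = [("a", "b"), ("a", "s"), ("b", "a"), ("b", "s")]
inner = ["a", "b"]  # delegating voters

# Out-degree Laplacian restricted to the delegating voters (s removed).
L = [[0, 0], [0, 0]]
for u, v in edges:
    i = inner.index(u)
    L[i][i] += 1
    if v in inner:
        L[i][inner.index(v)] -= 1

det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
print(det)  # 3 in-trees toward s: {a->s, b->s}, {a->s, b->a}, {a->b, b->s}

# Brute-force check: each delegating voter picks one outgoing edge; the
# choice is an in-tree toward s iff every voter's chain reaches s.
out = {u: [v for (x, v) in edges if x == u] for u in inner}

def reaches_s(choice):
    for u in inner:
        seen, cur = set(), u
        while cur != "s" and cur not in seen:
            seen.add(cur)
            cur = choice[cur]
        if cur != "s":
            return False
    return True

count = sum(reaches_s(dict(zip(inner, pick)))
            for pick in product(*(out[u] for u in inner)))
print(count)  # equals the determinant
```

This determinant identity underlies the Markov chain tree theorem mentioned in the rebuttal; handling cost functions on top of it is where the paper's hierarchical construction comes in.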
We also refer to our answer to question 2 of reviewer rGK4 for a detailed comparison between the complexity of our algorithm and the (undirected) Power Watershed. (b) The main challenge here is that copy-robustness is an axiom which is defined across multiple input instances. Hence, we needed to relate the output of our algorithm across different instances. To this end, we proved Lemma 2 (i), which is, to the best of our knowledge, new to our paper. (c) Here, the main challenge is that confluence prescribes the existence of a probability distribution over walks in the original graph; however, our algorithm for mixed Borda branching contracts the graph. Hence, we build upon a probability distribution over walks in the contracted graph, from which we then derive a distribution over walks in the original graph. **Q:** Why are axioms (i) and (ii) important for a rule to satisfy in practice? **A:** (i) Confluence is seen as desirable in order to guarantee the liability of voters for their delegations ([4]). The idea is that voters keep their delegations over time and can therefore evaluate their representative. Consider this simplified situation: If $v$ delegates directly to a set of casting voters and the delegation rule assigns some distribution over these representatives, then we can assume that $v$ takes responsibility for his/her delegation decision (at least over the long run). If now voter $w$ delegates only to $v$, then confluence prescribes that $w$ receives the same fractional assignment to casting voters as $v$ received. The rationale is that $w$ delegates its vote to $v$ and therefore wants his/her vote to be treated as $v$'s vote itself. Otherwise, the liability of $v$ towards its delegation decision is worthless from the point of view of voter $w$. This idea naturally extends to more complicated situations. (ii) Anonymity is important due to fairness considerations. Consider the simple example given in the introduction of our paper.
If $v_1$ were assigned to $s_1$ (via its second preferred delegate) while $v_2$ were assigned to $s_1$ (via its first preferred delegate), then $v_1$ could complain, arguing that the two voters -- despite being in a symmetric situation -- were not treated equally. **C:** I found it very hard to understand several phrases in the introduction. [...]\ **A:** We thank the reviewer for the feedback and will revise the introduction for the next version of the paper in order to make it more accessible to a broader audience. Below, we clarify some of the unclear phrases. **Q:** Line 47: “(i.e., a digraph with a rank function on the edges)” what is a “rank function on edges”?\ **A:** In social choice, the term "ranking" is used interchangeably with the notion of a weak or total order (more precisely, a strict ranking corresponds to a total order and a weak ranking to a weak order). Any (weak or strict) ranking can be naturally represented by a function $r$ mapping the elements to be ranked (in our case, edges) to the natural numbers, with the interpretation that $e \succeq e'$ if and only if $r(e) \leq r(e')$. **Q:** Line 51: In the explanation of confluence, what is a “subpath of v1”?\ **A:** Here, we are considering a path $P$ starting in a voter $v_1$, going via a voter $v_2$, and then ending in some casting voter $s$. The ``remaining subpath of $v_1$'' then refers to the suffix of the path $P$ starting from the occurrence of $v_2$ and ending in $s$. **C:** Line 64: “However, since none of the axioms connects the meaning [...]\ **A:** Our model assumes that delegations with a lower rank are preferred over delegations with a higher rank (we also use the term costs). However, the axioms (i)-(iii) do not reflect this interpretation. In particular, there exists a delegation rule satisfying the three axioms with the following, undesirable behavior: Consider a trivial instance with one voter that delegates to casting voter $s_1$ with rank 1, and to $s_2$ with rank 2.
Then, the rule assigns the voting weight of the voter completely to $s_2$, which would clearly contradict the intention of the voter. --- Rebuttal Comment 1.1: Title: Response Comment: I have read the authors' response and I thank them for the detailed clarifications. I have no further questions.
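The correspondence between a weak ranking and a "rank function on edges" described in the rebuttal above can be sketched as follows (illustrative code, ours; the element names and the indifference-class input format are hypothetical):

```python
def rank_function(weak_order):
    """Represent a weak ranking, given as a list of indifference classes
    (most preferred first), as a rank function r with the interpretation
    that e is weakly preferred to e' iff r(e) <= r(e')."""
    r = {}
    for level, tier in enumerate(weak_order, start=1):
        for e in tier:
            r[e] = level
    return r

# Strictly prefer e1; indifferent between e2 and e3; least prefer e4.
r = rank_function([["e1"], ["e2", "e3"], ["e4"]])
assert r["e1"] < r["e2"] == r["e3"] < r["e4"]
```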
Lie Point Symmetry and Physics-Informed Networks
Accept (poster)
Summary: This study proposes a new loss function for PINNs that imposes the symmetry of the PDE. Strengths: The PINN loss comes from the equation itself, but the proposed loss comes from a property of the equation. The idea is insightful. Weaknesses: The additional loss increases the computational cost. With the same computational budget, one can increase the number of evaluation points (N_r?) instead of introducing the proposed loss. The comparison might not be fair. The proposed method was only evaluated with the heat equation and Burgers' equation, which are very simple PDEs. Is the symmetry found in more practical and complicated PDEs? If so, is the proposed method useful there? The symbols are not unified. In (7), N_l or N_r? Just after (13), N_l=300 is used for data-fit, but the number of points for data-fit is N_0 in (8). In Table 1, "1000" might be wrong. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: With the same computational cost, is the proposed method superior to the vanilla PINN? Is the symmetry found in more practical and complicated PDEs? If so, is the proposed method useful there? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The proposed method is limited to PDEs with known symmetry. The generality is unknown. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their useful feedback! Below we answer their questions and comments. **R**: The additional loss increases the computational cost. With the same computational budget, one can increase the number of evaluation points ($N_r$?) **A**: This is true. We have included experiments comparing the effect of the number of evaluation points to that of the symmetry loss. **R**: The proposed method was only evaluated with the heat equation and Burgers' equation, which are very simple PDEs. Is the symmetry found in more practical and complicated PDEs? Then, is the proposed method useful? **A**: While symmetries are present in more complicated PDEs and, generally, one may expect more symmetries in higher dimensions, we had a hard time adapting existing boundary-conditioned PINN models to produce reasonable performance in these settings so that we could then improve their results using the symmetry loss. To see the effect of these symmetry constraints on larger and more complex problems, we believe the symmetry loss should be combined with further innovations in PINNs that enable its application to high-dimensional PDEs. **R**: The symbols are not unified. In (7), $N_l$ or $N_r$? Just after (13), $N_l=300$ is used for data-fit, but the number of points for data-fit is $N_0$ in (8). **A**: Thanks for pointing out the typos! In (7) it should be $N_r$. $N_l$ is used to refer to samples from both initial and boundary conditions ($N_l = N_0 + N_b$). We will make the notation more consistent and clear in the revision. **R**: In Table 1, "1000" might be wrong. **A**: Yes, thanks! It should be 10000. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: Thank you for your response and additional experiments. As I suspected, simply using many more points leads to a better result. While introducing symmetry is always effective, I would like to see curves of computational cost vs. performance for a fair comparison.
In other words, with the same computational budget, is introducing the symmetry loss more effective than increasing the number of points? > Finally, a common question is about the automatic derivation of symmetries from the PDE. This is indeed possible – computational algebraic software can calculate symmetry groups for a given PDE. Symbolic programs in MACSYMA, REDUCE, MAPLE, and Mathematica have been developed to find the equations for the infinitesimal generators. It is not easy to agree on this point without a demonstration. If you could demonstrate your method with such a procedure, I would give it a better score.
Summary: In this paper, a method for finding solutions to differential equations that represent physical phenomena using neural networks is considered. In particular, a method that takes symmetry into account is proposed. Specifically, the authors consider an infinitesimal generator that represents the symmetry of the equation, and constrain the prolongation of the solutions to be orthogonal to the isosurface of the solution expressed in Jet space. This is expected to improve the accuracy of the solutions. Strengths: As far as I know, the use of the symmetry of the model as an infinitesimal generator, rather than in the form of conservation laws, is certainly new. This enables the use of symmetry for equations that are not derived from the variational principle. The proposed method would be reliable in the sense that the method has a theoretical basis. In addition, the paper is clearly written. Weaknesses: The computation and implementation of symmetries and the constraints on the jet space based on them are considered to be quite difficult. On the other hand, the improvement confirmed by the numerical experiments is not significant, so the effect is limited compared to the difficulty of implementation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Can the computation of symmetry be automated by using computational algebraic software (e.g., Mathematica, Maple, Singular)? (2) Although the improvement in numerical experiments is not significant for the cases considered in this paper, can it be effective for systems with many symmetries and conservation laws, such as integrable systems? (3) Perhaps I am misunderstanding something, but is it not sufficient to simply add some additional constraints? For example, does introducing the higher-order derivative of the loss function as additional cost functions (i.e. Sobolev learning) have a similar effect as the proposed method?
I suppose that this can be used as an additional constraint because, if the loss function is zero for all x and t, then the derivative of the loss function with respect to x and t should also be zero. In this way, it seems to me, it would be easy to create additional constraints without the laborious symmetry computations. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No potential negative societal impact is expected. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. Below we respond to their individual questions and comments. **R**: Can the computation of symmetry be automated by using computational algebraic software (e.g., Mathematica, Maple, Singular)? **A**: Yes, computational algebraic software can be used to calculate symmetry groups for a given PDE. Symbolic programs in MACSYMA, REDUCE, MAPLE, and Mathematica have been developed to find the equations for the infinitesimal generators. For example, the document (not ours) in the link below is a guide for one such package. We thank the referee for pointing out the fact that we had not mentioned this, and we will clarify the draft. (https://docs.google.com/viewer?url=https%3A%2F%2Flibrary.wolfram.com%2Finfocenter%2FID%2F4231%2FYaLie.ps%3Ffile_id%3D3408) **R**: Although the improvement in numerical experiments is not significant for the cases considered in this paper, can it be effective for systems with many symmetries and conservation laws, such as integrable systems? **A**: Interesting point! Our hypothesis is that the symmetry loss will lead to more improvement for a system with many symmetries. We have added an experiment (see the attached PDF) where we track the improvement due to the symmetry loss as we use more symmetries for the heat equation. This experiment shows how increasing the number of infinitesimal generators used in calculating the symmetry loss leads to improved performance. This hypothesis is also confirmed when comparing the heat equation to Burgers' equation, since we can see that the greater number of symmetries for the heat equation leads to greater improvement in the results. **R**: Is it not sufficient to simply add some additional constraints (e.g. higher order derivatives of the loss)? **A**: Additional constraints on higher derivatives of the PINN loss are very different from constraints on derivatives produced through the symmetry loss.
The former changes the PINN loss (rather than the solution), for example making the PINN loss smoother, and we are not sure how it would affect the performance. The symmetry loss, by contrast, explicitly encourages that certain infinitesimal transformations of the solution found by the network remain solutions to the PDE. We will add a discussion of these points to the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the experimental results on the relationship between the number of symmetries and the performance of the method. Based on these results, it seems that the proposed method is very effective for equations with a large number of symmetries and conservation laws, such as integrable PDEs. In particular, if the method can be combined with computational algebra so that a large number of symmetries can be easily, or automatically, handled, it can be a very powerful method. However, as the number of symmetries in the additional experiment is limited, there remains a concern that as the number of symmetries increases, the performance improvement may saturate. So I will keep my score this time; however, if additional experiments that show that the performance improvement would not saturate are provided, I am happy to increase the score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging the significance of the proposed method. Unfortunately, current PINN models that can take as input different initial conditions fail for higher dimensions. This is due to inherent difficulties in training PINN models, the cost of calculating gradients, and other gradient pathologies that have been studied previously [1]. Given these limitations, it is not feasible to showcase the performance of the proposed algorithm for high-dimensional or complex systems. We hope that further developments in PINN models will allow us to exploit their symmetries with the proposed algorithm. [1] Sifan Wang, Yujun Teng, and Paris Perdikaris.
Understanding and mitigating gradient pathologies in physics-informed neural networks, 2020b.
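The claim in the rebuttal above, that applying a symmetry transformation to a solution yields another solution, can be checked concretely for the 1D heat equation. The sketch below is ours, not the authors' implementation: it uses finite differences rather than autograd, a known exact solution rather than a network, and the Galilean boost as the example one-parameter symmetry.

```python
import math

def heat_residual(u, x, t, h=1e-3):
    """Finite-difference residual of the 1D heat equation u_t - u_xx
    at a single point (x, t), using central differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_t - u_xx

# An exact solution of u_t = u_xx.
base = lambda x, t: math.exp(-t) * math.sin(x)

# Galilean boost with parameter eps: if u solves the heat equation, so
# does exp(-eps*x + eps**2 * t) * u(x - 2*eps*t, t).
eps = 0.3
boosted = lambda x, t: math.exp(-eps * x + eps**2 * t) * base(x - 2 * eps * t, t)

# Both the base solution and its transform have (near-)zero residual.
for (x, t) in [(0.5, 0.2), (-1.0, 1.0), (2.0, 0.7)]:
    assert abs(heat_residual(base, x, t)) < 1e-5
    assert abs(heat_residual(boosted, x, t)) < 1e-5
```

The paper's symmetry loss enforces the infinitesimal version of this property (via prolonged generators on the jet space) on the network's output, rather than checking finite transformations after the fact as this sketch does.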
Summary: The work proposes a generic method to incorporate Lie point symmetry into physics-informed neural networks (PINNs) by augmenting the loss function. The method leverages automatic differentiation as other PINNs do, because the condition for symmetries is written using differentials. The authors demonstrated that the model with symmetry gained better performance than the one without symmetry. Strengths: * The work is based on an established mathematical theory found in Olver (1986). * Once the Lie point symmetries are obtained, it seems quite simple to incorporate them, thanks to automatic differentiation and the PINNs framework. Weaknesses: * The experiments look insufficient. They compared with and without symmetry but not against other state-of-the-art PINN-based models. It makes the contribution of the work look limited because most of the mathematical contributions directly come from Olver (1986), and the results from the symmetry model are not good enough (in particular, Figure 2). They claimed that "our goal is to showcase the effectiveness of using symmetries" (lines 230-231). However, the positive effect of the symmetries is well-known to the community (e.g., Wang et al. 2021a). * The superiority of incorporating symmetries through the loss function is unclear. I suggest the authors include data augmentation methods (e.g., Brandstetter et al. 2022a) and equivariant models (e.g., Wang et al. 2021a). * The authors claimed, "Our work presents the foundations for leveraging Lie point symmetry in a large family of Neural PDE solvers that do not require access to accurate simulations" (lines 316-318). However, another part says, "The data for Burgers' equation is obtained using the Fourier Spectral method" (line 299). It makes the reviewer confused about whether the method uses simulation data as supervisory signals. * The contribution of the paper is not clear enough.
Is it a new theorem proved, an undiscovered observation found, a state-of-the-art performance, or a novel problem setting? The reviewer recommends the authors clearly state the contribution of the work and show the supporting facts. * The mathematical presentation lacks correctness and clarity. The reviewer found no clear definition of "Lie point symmetry." The definition of $u^{(n)}$ lacks $u$ (see Olver (1986) p.97). It may be by mistake; however, this is an essential part of the theory. Thus the reviewer recommends the authors carefully re-check the manuscript for more mathematical correctness. For instance, if there is no $u$ in $u^{(n)}$, one cannot express the advection term of the Burgers' equation in the jet space. Minor points: * Please add an explanation about $N_s$. * The paper uses "viscosity" for the heat equation, which seems uncommon. It could be called the diffusion coefficient. * In the first equation of Equation (1), $t$ should be defined in an open set (see, e.g., Jürgen Jost "Partial Differential Equations, Third Edition" (2012) Chapter 1). That's why we need the initial condition. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: * How long do the training and prediction take, respectively? * What is the meaning of "orthogonality" in the paper, e.g. "our symmetry loss encourages the orthogonality of $\mathrm{pr}^{(n)}v$ and the gradient of $\Delta$" (line 237)? The condition contains the PDE itself, so the reviewer guesses the use of the terminology could require some explanation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: The limitations are stated sufficiently in the paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback! Below we respond to individual questions and comments. **R**: … It makes the contribution of the work look limited because most of the mathematical contributions directly come from Olver (1986), and the results from the symmetry model are not good enough (in particular, Figure 2). They claimed that "our goal is to showcase the effectiveness of using symmetries" (lines 230-231). However, the positive effect of the symmetries is well-known to the community (e.g., Wang et al. 2021a). **A**: While the theory of Lie point symmetry in PDEs is an established area, our novel contribution is the methodology for incorporating this symmetry and, in particular, the design of the symmetry loss for PINNs. To our knowledge, ours is the first work showing the effectiveness of using Lie point symmetries for PINNs (Wang et al. 2021a, while quite relevant, is not concerned with PINNs). **R**: The superiority of incorporating symmetries through the loss function is unclear. I suggest the authors include data augmentation methods (e.g., Brandstetter et al. 2022a) and equivariant models (e.g., Wang et al. 2021a). **A**: Data augmentation is not possible for PINN models as they are trained directly with the PDE equation – i.e., there is no ground truth training data in PINN to augment. Therefore, one could simply increase or “augment” the number of sampling points, and our experiments compare the effect of sample size and symmetry. To the best of our knowledge, there is no “equivariant model” for PINNs. In general, we expect comparison to other neural solvers that are not PINNs to significantly favour those solvers, since they rely on exact solutions of the underlying PDE for learning. However, they also suffer from the same issue. **R**: The authors claimed, "Our work presents the foundations for leveraging Lie point symmetry in a large family of Neural PDE solvers that do not require access to accurate simulations" (lines 316-318).
However, another part says, "The data for Burgers' equation is obtained using the Fourier Spectral method" (line 299). It makes the reviewer confused about whether the method uses simulation data as supervisory signals. **A**: Note that the data (solution) is generated only for evaluation purposes, not for training. For evaluation, we report the MSE with respect to the ground truth solution. We will make this point clear in the revision. **R**: The contribution of the paper is not clear enough. Is it a new theorem proved, an undiscovered observation found, a state-of-the-art performance, or a novel problem setting? The reviewer recommends the authors clearly state the contribution of the work and show the supporting facts. **A**: Our main contributions are the following; we will make them explicit in the revision: (1) a methodology to calculate the group action on the jet space of the PDE using automatic differentiation; (2) an equation for a symmetry loss that enforces arbitrary Lie point symmetry of the PDE in PINN models; and (3) a demonstration of the effectiveness of symmetry relative to increasing the number of sampling points in PINNs. **R**: Please add an explanation about N_s **A**: Sorry for this confusion in the notation. We will change N_s to N_0 to be consistent with the notation originally introduced. **R**: The paper uses "viscosity" for the heat equation, which seems uncommon. It could be called the diffusion coefficient. **A**: Yes, we will change this. **R**: In the first equation of Equation (1), t should be defined in an open set **A**: Thanks for pointing out this typo. It will be fixed in the revision. --- Rebuttal Comment 1.1: Comment: The reviewer appreciates the responses made by the authors. Now it is clear that the method uses no training data and incorporates symmetries behind the PDE, which is new. However, the contribution of the paper does not seem sufficient to be accepted at the conference in terms of methodology and experimental significance.
The method is a straightforward implementation of Olver (1986) into PINN, and empirical results showed no significance or only marginal improvement. Therefore, the reviewer keeps the score unchanged. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We emphasize that while developing the theory of symmetry groups of PDEs indeed exists in the mathematical literature, using them to enforce symmetries in neural PDE solvers is a novel contribution. More explicitly, the contributions are listed below, and we will include this list in the revision: 1. Methodology to calculate the group action on the jet space of the PDE using automatic differentiation. 2. Equation for a symmetry loss that enforces arbitrary Lie-point symmetry of the PDE in PINN models. 3. Demonstrate the effectiveness of symmetry relative to increasing the number of sampling points in PINNs.
Summary: This paper proposes to enhance Physics-Informed Neural Networks (PINNs) by incorporating local Lie-point symmetry into them. It is achieved by introducing an additional symmetry loss term, which requires analytic computation of the PDE’s symmetries. This loss term is designed to encourage orthogonality between the PDE equation and the different symmetry transformations. Evaluation is performed on the 1D heat and Burgers' equations. They show that the incorporation of symmetry gives better performance, especially when the number of sampled points is low. Being unfamiliar with Lie theory, I found the theoretical part to be quite challenging. Overall, I found the paper interesting, but a bit short on the experimental side. Strengths: - The proposed approach is theoretically motivated, and leads to good performance compared to vanilla PINN - It doesn’t need many modifications to the vanilla PINN to implement Weaknesses: - The proposed method needs to compute the symmetries analytically first, before integrating them into the loss function. Would there be a way to automate this part? - In the considered examples, the better performance obtained with the additional loss terms could have also been achieved by denser sampling, as has been noted. What Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How does the number of symmetry constraints scale with respect to the dimensionality of the considered PDE? - Could this symmetry loss term be incorporated to other PINN variants too? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: - The experiments are only showcased on 1D PDEs Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback! Below we respond to their questions and comments. **R**: The proposed method needs to compute the symmetries analytically first, before integrating them into the loss function. Would there be a way to automate this part? **A**: Yes, computational algebraic software can be used to calculate symmetry groups for a given PDE. Symbolic programs in MACSYMA, REDUCE, MAPLE, and Mathematica have been developed to find the determining equations for the infinitesimal generators. For example, the document (not ours) in the link below is a guide for one such package. (https://docs.google.com/viewer?url=https%3A%2F%2Flibrary.wolfram.com%2Finfocenter%2FID%2F4231%2FYaLie.ps%3Ffile_id%3D3408) **R**: In the considered examples, the better performance obtained with the additional loss terms could have also been achieved by denser sampling, as has been noted. What **A**: We agree that increasing sample size, similar to equivariance/symmetry, leads to better generalization. We show the effect of both symmetry and increased sample size for both heat and Burgers' equations. Please note that the question was incomplete. **R**: How does the number of symmetry constraints scale wrt to the dimensionality of the considered PDE? **A**: The number of symmetry constraints corresponds to the number of infinitesimal generators of the symmetry group of the PDE. While this number depends on the PDE equation itself, one generally expects that in higher dimensional PDE, the existence of certain symmetries (e.g., Euclidean symmetry) leads to an increased number of these infinitesimal generators. **R**: Could this symmetry loss term be incorporated to other PINN variants too? **A**: Yes, this loss term can be used in any PINN model. As an example, we have added an experiment for the Heat equation with a modified MLP model introduced in [1]. We also tried the causal training suggested in [2]. 
However, in the operator learning setting that we were trying (i.e., using a DeepONet architecture to handle different initial conditions), we did not see improvement in the results, as only the models trained with a small $\epsilon$ (the slope of the temporal weights) led to reasonable predictions. [1] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient pathologies in physics-informed neural networks, 2020b. [2] Sifan Wang, Shyam Sankaran, & Paris Perdikaris. (2022). Respecting causality is all you need for training physics-informed neural networks. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. However, due to the limited numerical experiments, I will keep my rating
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful and constructive feedback! We are happy to see that they found the idea to be motivated (R-qo6e, R-vAL8), insightful (R-KJvK) and novel (R-vAL8, R-5AKh), the empirical results to be compelling (R-vAL8), and the presentation of the paper to be clear (R-5AKh). Below we address individual questions and concerns raised in the reviews. One common concern was about alternative baselines. In the attached PDF, Table 4 presents new results comparing the effect of symmetry for different sample sizes using another architecture for PINN that uses a gating mechanism [1]. We do not observe any significant improvements over the vanilla MLP with or without symmetry loss. However, we do observe consistent improvement with the addition of symmetry loss to the PINN loss, especially in a low data regime. We also experimented with Causal PINN [2], but did not see any improvements with or without symmetry loss over the vanilla PINN loss. Another suggested baseline is data augmentation; since the PINN model is not trained on any dataset, we cannot perform data augmentation. One could simply increase the number of training points, and we do have experiments comparing the effect of symmetry loss with that of increasing data points (note that we cannot even “transform” the training points to perform augmentation, since the symmetry also acts on the unknown dependent variable). Yet another family of suggested baselines is Neural Operator-based methods. We expect Neural Operators to generally outperform PINNs, due to their advantage of training on exact solutions. Note that all other symmetry-based neural solvers cited by the reviewers (and in our paper), for example [3], are either Neural Operators or otherwise require exact solutions for training; therefore, they are not comparable to our PINN-based approach. We are unaware of any symmetry-based improvements for PINNs.
Another common question was regarding the effect of the number of PDE symmetries on the effectiveness of symmetry loss. Table 3 reports the result of a new ablation, in which we increase the number of symmetries of the same equation used in the symmetry loss. We observe consistent improvement as we incorporate more symmetries. Some reviewers have raised concerns about the technical difficulty of the material, as it affects readability. We appreciate this difficulty, and our solution to make the content palatable to the ML community is to make the paper as self-contained as possible by providing a comprehensive (less than 3 pages) background on Lie point symmetries and using simple running examples and figures. However, some level of difficulty remains inevitable due to the technical nature of the topic. Finally, a common question is about the automatic derivation of symmetries from the PDE. This is indeed possible – computational algebraic software can calculate symmetry groups for a given PDE. Symbolic programs in MACSYMA, REDUCE, MAPLE, and Mathematica have been developed to find the equations for the infinitesimal generators. We will clarify these points and include the new results in the revision. [1] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient pathologies in physics-informed neural networks, 2020b. [2] Sifan Wang, Shyam Sankaran, & Paris Perdikaris. (2022). Respecting causality is all you need for training physics-informed neural networks. [3] Rui Wang, Robin Walters, and Rose Yu. Incorporating symmetry into deep dynamics models for improved generalization, 2021a. Pdf: /pdf/c0a26303e026203b8b78820f21597088db7d7d96.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes adding a custom loss function to the training of physics-informed neural networks (PINNs) to force symmetry requirements when learning solutions to PDEs. A given partial differential equation (PDE) is associated with a Lie group that acts on the space of solutions of the PDE and leaves this space invariant. The custom loss penalizes non-orthogonality between the gradient from the PDE and the symmetry constraint. This forces the updates to the solution to be on the level-set of group-invariant solutions. The loss is computed using the action of the Lie algebra generators on the jet space of the solutions. The authors show that the additional loss leads to better data efficiency and accuracy in two test cases, the heat equation and Burgers' equation. Strengths: - The idea of adding symmetry constraints to neural network PDE solvers is a very interesting contribution to the field. - The execution is novel and makes clever use of Lie algebra theory on PDEs. - The exact form of the loss function is clearly motivated. - The experimental results are compelling and convincing. Weaknesses: - The paper would benefit significantly from a more thorough ablation study. Specifically, an exploration of the effects of incrementally adjusting the relative weighting of the different loss terms may offer crucial insights. It would be nice to analyze the data efficiency as a power law, for example, as a function of the loss weights. - The discussion surrounding the two types of loss used in this study (cosine and orthogonality) is not fully clear. While the authors do mention these losses, there is insufficient detail regarding their selection. - The mathematical notations of the paper are not clear. Overall, I've found the paper quite hard to parse. For example, the notation $\text{pr}^{(n)}[\Delta]$ is not really explained and I could only guess what it means. The Lie algebras are introduced suddenly without being referred to before.
Please rewrite the introduction of the projector using the Lie algebras. - There is an absence of any comparison with baseline models using data augmentation techniques or fully equivariant architectures in terms of numerical results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - See my point on the power law before. I would plot different power laws based on different triplets of loss weights and show the coefficients. - Please compare to other baselines (including neural network operators and other PINNs) regarding numerical values. There are better ways to compare methods than eyeballing the graphs in Fig. 2 and Fig. 3. In particular because the errors might be very frame dependent. I would give a numerical error by averaging several runs via several seeds over several frames. In the current form, I am unable to fully appreciate the performance of the method. If this point is addressed correctly, I am open to increasing my mark. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations have been well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
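The orthogonality loss the review asks about can be illustrated with a small sketch. This is not the paper's implementation; it only assumes, as the review's summary suggests, that the penalty is the squared cosine between the PDE-residual gradient and the symmetry-constraint direction, and that the terms are combined in a weighted sum. All names and default weights are hypothetical.

```python
import numpy as np

def orthogonality_loss(grad_pde, grad_sym, eps=1e-8):
    # squared cosine between the PDE-loss gradient and the symmetry
    # direction; driving it to zero keeps updates on the level set of
    # group-invariant solutions (hypothetical form of the penalty)
    num = np.dot(grad_pde, grad_sym)
    den = np.linalg.norm(grad_pde) * np.linalg.norm(grad_sym) + eps
    return (num / den) ** 2

def total_loss(l_ic, l_pde, l_sym, alpha=150.0, beta=20.0, gamma=100.0):
    # weighted sum: initial-condition term, PINN residual, symmetry term
    return alpha * l_ic + beta * l_pde + gamma * l_sym
```

With this form, orthogonal gradients incur zero penalty and parallel gradients the maximal penalty of one, which matches the intuition that updates should stay tangent to the invariant level set.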
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. Below we respond to the questions and concerns raised in the review. **R**: The paper would benefit significantly from a more thorough ablation study. Specifically, an exploration of the effects of incrementally adjusting the relative weighting of the different loss terms may offer crucial insights. It would be nice to analyze the data efficiency with a power law, for example, as a function of the loss weights. **A**: We agree with the reviewer that more experiments can always be more helpful. We will elaborate on the following empirical observation that suggests the proposed ablation on loss terms may not add much: We observe that for different problems using the same ratio for PINN and symmetry loss performs well, and further adjustment leads to little change in performance. In contrast, the initial condition matching loss is sensitive (as also seen in prior works on PINN), and therefore we treat it as a hyper-parameter. We believe the suggested ablation will reflect these findings, but we are open to running the experiments if the reviewer still finds it useful. **R**: There is an absence of any comparison with baseline models using data augmentation techniques or fully equivariant architecture in terms of numerical results. **A**: Since PINN models are not trained on any dataset, one cannot compare them to data augmentation. This is in contrast to neural operator methods, where data augmentation makes sense. The closest one could get to data augmentation is to add points to the PINN loss corresponding to symmetry transformations. But since the value of the dependent variable is unobserved, it cannot be transformed, and the whole scheme simply involves increasing the points for the PINN loss. We do have such experiments. Additionally, to the best of our knowledge, there is no equivariant architecture for PINN models, in part due to the form of the action of the Lie point symmetries on the PDE. 
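The power-law analysis of data efficiency that the reviewer requests is commonly done by least-squares regression in log-log space. As a hedged sketch (not from the paper), one could fit $\text{err} \approx a \cdot N_r^{-b}$ per loss-weight triplet and compare the exponents $b$; the data below is synthetic and all names are hypothetical.

```python
import numpy as np

def fit_power_law(n, err):
    # fit err ≈ a * n**(-b) by ordinary least squares in log-log space,
    # a standard way to summarize data-efficiency curves
    slope, intercept = np.polyfit(np.log(n), np.log(err), 1)
    return np.exp(intercept), -slope

n = np.array([500.0, 2000.0, 10000.0])  # hypothetical collocation-point counts
err = 2.0 * n ** -0.5                   # synthetic errors following a known law
a, b = fit_power_law(n, err)            # recovers a ≈ 2, b ≈ 0.5
```

Repeating the fit for different loss-weight triplets and plotting the fitted exponents would directly answer the reviewer's question about how weighting affects data efficiency.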
**R**: The mathematical notations of the paper are not clear. Overall, I've found the paper quite hard to parse. For example, the notation $\text{pr}^{(n)}[\Delta]$ is not really explained, and I could only guess what it means. The Lie algebras are introduced suddenly without having been referred to before. Please rewrite the introduction of the projector using the Lie algebras. **A**: We will clarify the specific examples pointed out by the reviewer. The topic is mathematically challenging, and we use a significant portion of the background section introducing many ideas with a running example. **R**: Please compare to other baselines (including neural network operators and other PINNs) **A**: Due to access to ground truth data during training, neural operators generally perform better than PINNs. Since we are improving PINN using symmetry loss, the vanilla PINN model seems like the right baseline. Our objective here is to show that the PINN model at large can be improved through the use of symmetries. To further strengthen the point, we performed new experiments comparing the variant of [X] with and without symmetry. Here again, we see the positive effect of symmetry loss, although we do not see an improvement in general performance compared to vanilla MLP. **R**: regarding numerical values. [...] In particular because the errors might be very frame dependent. I would give a numerical error by averaging several runs via several seeds over several frames. [...] If this point is addressed correctly, I am open to increasing my mark. **A**: Results in Tables 1 and 2 are consistent with the reviewer’s suggestions. They are produced by averaging the error over multiple initial conditions, which is the main source of variance. The addition of multiple seeds for the same initial condition does not significantly change the results. Images are only provided for qualitative comparison. --- Rebuttal Comment 1.1: Comment: Thank you for your answers. 
### On the ablation When a new method includes free parameters, it is essential to discuss how they have been chosen and what values were settled upon, even if the impact is found to be minimal. I was unable to locate the final values of each coefficient clearly. You refer to Appendix C (line 275) in the text, but this does not cover that specific detail. Since this new loss is the main contribution of the paper, I am requesting greater clarity on this matter. ### On tables I acknowledge Tables 1 and 2, but I find that there is no clear description of what they exactly represent. I have not seen any explanation of how the test set was selected, so I cannot precisely determine what these numbers mean. I encourage the authors to provide a more rigorous description of their experiments and datasets in the Appendix. ### On the presentation I recognize the challenge of presenting a new (for most of the machine learning community) mathematical concept within the constraints of a short paper. However, complex material can be well presented by being succinct and consistent with notations, something that is not always evident in your paper. For example, you begin by introducing the parametrization of Lie groups by one-parameter subgroups, then proceed to discuss the infinitesimal generators of these subgroups. Later in the results, you start using the term "Lie algebras" without having previously mentioned it. It would be more coherent to introduce cleanly what a Lie algebra is, explain the exponential map, and maintain consistency thereafter. In another instance, you first introduce the prolongation of the action of the Lie group on the jet space and then use the same notation to denote the prolongation of the action of the Lie algebra. This could lead to confusion, and I suggest making the notation and explanation more transparent. Overall I will keep my score and encourage the authors to improve the consistency and clarity of their exposition. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for their answers and especially for pointing out how the presentation of the paper can be improved! **On the ablation:** We performed a hyperparameter search for the coefficients of the loss terms for both models. We found that, similar to PINNs, the model is sensitive to the weight given to the supervised loss for the initial conditions vs the symmetry/PINN loss. The specific coefficient values for the model trained with symmetry loss are:
- $\beta = 20$, $\gamma = 100$ for $N_r$ = 500
- $\beta = 20$, $\gamma = 80$ for $N_r$ = 2000
- $\beta = 20$, $\gamma = 40$ for $N_r$ = 10000

The specific coefficient values for the model trained without symmetry loss are:
- $\alpha = 150$, $\beta = 20$ for $N_r$ = 500
- $\alpha = 150$, $\beta = 20$ for $N_r$ = 2000
- $\alpha = 130$, $\beta = 20$ for $N_r$ = 10000

We will make sure to include these in the appendix. **On tables:** We have referred to the tables in the Results section for both of the experiments and described their significance, mentioning how they provide evidence that symmetry loss is most useful in low-data regimes. We failed to mention that the test set of 300 initial conditions was selected randomly from the full dataset, and we will mention this in the revision. **On the presentation:** We appreciate the reviewer’s constructive criticisms, especially on how to improve the quality of the presentation of the paper. They will serve to improve the overall quality of our paper in the revision. We believe addressing the two points raised is rather straightforward and would not require a major change.
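The symmetry machinery debated in this thread can be made concrete with a numeric check. As a hedged illustration (not from the paper): the heat equation $u_t = u_{xx}$ admits the Lie point symmetry $(x, t) \mapsto (\lambda x, \lambda^2 t)$, which maps solutions to solutions, and this can be verified with finite differences. The particular solution and value of $\lambda$ below are arbitrary choices for the demonstration.

```python
import numpy as np

def heat_residual(u, x, t, dx=1e-3, dt=1e-4):
    # finite-difference residual of the heat equation u_t - u_xx at (x, t)
    u_t = (u(x, t + dt) - u(x, t - dt)) / (2.0 * dt)
    u_xx = (u(x + dx, t) - 2.0 * u(x, t) + u(x - dx, t)) / dx ** 2
    return u_t - u_xx

u = lambda x, t: np.exp(x + t)             # one exact solution: u_t = u = u_xx
lam = 1.7
v = lambda x, t: u(lam * x, lam ** 2 * t)  # image of u under the scaling symmetry
```

Both `u` and its scaled image `v` have residuals near zero, which is exactly the invariance of the solution space that the symmetry loss is meant to exploit.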
Hybrid Search for Efficient Planning with Completeness Guarantees
Accept (poster)
Summary: In this paper, the authors propose a hybrid technique to speed up the planning tasks. The novelty is that the completeness is guaranteed. I agree that guaranteeing completeness is a good property for a learning-based algorithm. Strengths: 1. The paper is well-written. 2. The novelty of the contribution is moderate. 3. It seems that the algorithm tried to find a balance between classic algorithms (complete but slow) and learning-based algorithms (fast but non-complete). The simulated results look good. Weaknesses: 1. I would say that the phrase "hybrid search" is not a suitable expression for the proposed algorithm. There have been too many "hybrid" algorithms. Currently I haven't come up with a suggestion, but the authors can think about it. 2. Can the author present the problem to be solved in a formal environment? I mean "Problem 1: (xxx) ...". Currently the presentation is only friendly to experts. (Fig.1 does a good job.) 3. I suggest the authors implement classic algorithms and show their performance. In my opinion, they are complete and optimal, but slow. By doing this the author can show the quality of the solution generated by the proposed algorithm. 4. The algorithms should be tested in larger simulated environments. 5. I think the related work section can be re-structured. Currently it just lists three kinds of search algorithms without mentioning their connection to this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my comments in the "weakness" block. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the paper has presented the limitation and future works properly. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and questions. >I would say that the words "hybrid search" is not a suitable expression for the proposed algorithm. There have been too many "hybrid" algorithms. Currently I haven't come up with a suggestion, but the authors can think about it. Thank you for the comment. A more concrete alternative to hybrid search is a high-level search augmented with low-level actions. To make it easy for the potential reader, we introduce our main idea and present the name hybrid search already in the abstract. >Can the author present the problem to be solved in a formal environment? I mean "Problem 1: (xxx) ...". Currently the presentation is only friendly to expert. (Fig.1 does a good job.) We will add a more formal presentation of the problem, preferably at the beginning of Section 3, depending on the page limit constraints. >I suggest the authors implement classic algorithms and show their performance. In my opinion, they are complete and optimal, but slow. By doing this the author can show the quality of the solution generated by the proposed algorithm. We performed this analysis. Please see Table 12 in the common response pdf. Generally, we indeed note that HIPS-$\varepsilon$ greatly outperforms the classical planning algorithms in terms of node expansions, even if we allow classical planning algorithms to use heuristics defined using prior knowledge, which HIPS-$\varepsilon$ does not have access to. >The algorithms should be tested in larger simulated environments. We think that the environments used are already challenging and require significant computing power, as evidenced by the relative failure of classical planning methods at solving the problems (see Table 12 in the common response pdf). To further increase the difficulty, we perform the OOD experiments with Box-World (in the paper) and Sokoban (Figure 6 in the common response pdf). 
Also, the 5x5 sliding tile puzzle environment is considered challenging in the classical planning literature [1] (page 71). >I think the related work section can be re-structured. Currently it just listed three kinds of searching algorithms, without mentioning their connection to this paper. Thank you for this comment. The first class of algorithms is hierarchical planning methods, but they rely on numerical optimization or are not suited to solving difficult discrete problems as used for evaluation in this work because they lack search, the capability to generate exact subgoals, or the ability to plan. The second class of algorithms represents an orthogonal direction for improving hierarchical planning algorithms, and successfully combining these algorithms with our work could lead to even stronger agents, which is a subject for further work. The third class comprises the algorithms most closely related to our work, which our work builds on and which we use as baselines for evaluating our method. We will incorporate this information in the related work section to clarify the connection to prior work. [1] Russell, S. & Norvig, P. Artificial Intelligence: A Modern Approach. 3rd edition. --- Rebuttal Comment 1.1: Comment: First I would like to thank the authors for their comments. Please see my response to the authors' rebuttal. 1. "hybrid-search" issue: Sorry, I don't think "high-level search augmented with low-level actions" is an appropriate naming of the algorithm. Nor is it elegant to explain the paper title in the abstract (then the title is useless). Can the author think more? 2. "experiment" issue: I still don't think the experimental settings are challenging given my knowledge in classic planning (now it is 2023!). I noticed that this issue was also raised by other reviewers. 3. "related work" issue: The authors tried to categorise and differentiate the existing algorithms, which is good. However, I hope the authors can explain more on this. 
For example, in the authors' response it is stated that the first class of algorithms cannot generate exact subgoals (what does it mean?), or lacks the ability to plan (but A* was presented in this category in the manuscript, wasn't it?). In fact, after re-reading the manuscript, I think the arrangement of the related work section is not correct: It describes three classes of algorithms with different functionalities, instead of describing three classes of algorithms that try to solve THE problem, "efficient planning on grids". --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments and questions >"hybrid-search" issue: Sorry, I don't think "high-level search augmented with low-level actions" is an appropriate naming of the algorithm. Nor is it elegant to explain the paper title in the abstract (then the title is useless). Can the author think more? We believe that the name Complete Subgoal Search (CSS/CSubS) would suit the approach. It highlights that the primary driver of the search is the subgoals, but the search is also made complete by including the low-level actions. Complete Subgoal Search with Low-level Actions, or CSSLA is also an alternative that includes the low-level actions in the name of the proposed search approach. We thank the reviewer for the suggestion and are happy to ponder this further. We are surely interested in using the best possible title. > "experiment" issue: I still don't think the experimental settings are challenging given my knowledge in classic planning (now it is 2023!). I noticed that this issue was also raised by other reviewers. We selected the experimental setup to validate the claims made in our paper. We have evaluated Complete Subgoal Search on all problems used in the original HIPS paper and showed that our method either outperforms or is on par with the baseline HIPS on each benchmark. Additionally, we have performed difficult OOD experiments to evaluate the transferability of our method. 
The results show significant improvements in comparison with prior work. Furthermore, we have compared the performance of our method to classical planning approaches, analyzed the sensitivity of our framework to the value of $\varepsilon$, and performed additional experiments to understand how the value of $\varepsilon$ affects the search behavior. We are happy to consider running additional experiments proposed by the reviewer if any of the claims in our paper is not validated by our experimental results. > "related work" issue: The authors tried to categorise and differ the existing algorithms, which is good. However, I hope the authors can explain more on this. For example, in the authors' response it is stated that the first class of algorithms cannot generate exact subgoal (what does it mean?), or ability to plan (but A* was presented in this category in the manuscript, isn't it?). In fact, after re-reading the manuscript, I think the arrangement of the related work section is not correct: It describes three classes of algorithms with different functionalities, instead of describing three classes of algorithms that tried to solve THE problem, "efficient planning on grids". By referring to an inability to generate exact subgoals, the models either do not generate subgoals at all or generate subgoals in the latent state, which may not correspond to valid or reachable states, which is an issue in environments such as Sokoban that require precisely correct actions to solve the tasks. Referring to a lack of ability to plan was an oversight on our part. Thank you for pointing that out. We will replace that with “do not plan with an explicit search,” as CEM-based methods plan but do not perform a systematic search. We note that methods without search struggle to solve the benchmarks used in our work (see the original HIPS paper and Table 2). Furthermore, A* is indeed an algorithm that does have the ability to plan, but it is not a hierarchical algorithm. 
Note that we discussed A* and Dijkstra in the context of hierarchical planning algorithms that use them as subcomponents for search in a continuous or visual setting, which is different from ours. All in all, in the related work section, we have mostly presented and compared our approach with other hierarchical planning approaches suited for solving various families of problems, as we consider those algorithms to be the most closely related to ours from a methodological standpoint, even though the problems they solve are somewhat different with the exception of kSubS, AdaSubS, and HIPS. We are also happy to take suggestions if the reviewer believes we have missed some relevant line of work that should be discussed in our paper. Furthermore, to make the related work section easier to read we will add a short introduction to the beginning of the related work section that explains how we selected the different method categories and how the discussed methods relate to our approach.
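The completeness mechanism discussed in this thread can be sketched in a few lines. This is not the authors' code; it assumes a toy setting where states are integers, low-level actions move by $\pm 1$ and share total probability $\varepsilon$, a (possibly flawed) subgoal generator proposes jumps with the remaining $1 - \varepsilon$, and priorities are cumulative negative log-probabilities. All names are hypothetical.

```python
import heapq, math

def hybrid_search(start, goal, subgoals, eps=0.1, limit=100000):
    # best-first search mixing high-level subgoals with eps-weighted
    # low-level actions; because every state always has low-level
    # successors with probability eps/2 each, every state on this toy
    # space is eventually reached, which sketches the completeness claim
    frontier = [(0.0, start)]
    best = {start: 0.0}
    expansions = 0
    while frontier and expansions < limit:
        cost, s = heapq.heappop(frontier)
        if cost > best.get(s, float("inf")):
            continue  # stale queue entry
        expansions += 1
        if s == goal:
            return expansions
        succ = [(t, (1.0 - eps) * q) for t, q in subgoals.get(s, [])]
        succ += [(s + 1, eps / 2.0), (s - 1, eps / 2.0)]  # low-level actions
        for t, q in succ:
            c = cost - math.log(q)
            if c < best.get(t, float("inf")):
                best[t] = c
                heapq.heappush(frontier, (c, t))
    return None

# the subgoal generator only proposes a jump from 0 to 5; state 7 is
# reachable only through low-level actions, yet the search still finds it
steps = hybrid_search(0, 7, {0: [(5, 1.0)]}, eps=0.1)
```

A pure subgoal search would fail on this instance because the generator never proposes the goal; the $\varepsilon$-weighted low-level edges are exactly what restores completeness, at the price of low-level expansions getting low priority.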
Summary: The paper introduces a method, called hybrid search (or HIPS-\epsilon) that combines high-level planning (with subgoals generated by a learned model) with low-level search. Subgoals allow for more efficient search, but existing subgoal-based methods are prone to errors, which can lead to failures in finding solutions, even if the solution exists. Low-level search gives the completeness guarantee but is usually much less efficient. Hybrid search is a novel and efficient way of combining the advantages of both approaches. The method was tested on 4 different tasks, and various analyses show its excellent performance. Strengths: The motivation for this paper is very clear and it precisely formulates the problem it is going to solve. The introduction section is well written with clear references to other works in the field. Baselines are chosen accurately. There are also a lot of insightful comments about the comparison of hybrid search to baselines. There is a broad evaluation of the method: success rate across all envs, epsilon-hyperparameter analysis, unsolved puzzles ratio, etc. Figure 3 is very insightful. I agree with the authors that OOD generalization is a very attractive property of the method. Hybrid search is explained in a clear way and, most importantly, the meaning of epsilon is easy to understand. The technical improvement over HIPS is simple; it mainly consists of two contributions: modification of policy, and modification of search heuristics. I consider it a huge advantage that a simple technique gives excellent results. The quality of the text is high, and I really liked all the comments. The main result, that is, achieving completeness without loss of efficiency, is excellent. Weaknesses: Table 1: there is no information about the number of problem instances the method was tested on. Are all results statistically significant? 
I know that there is no place in this table to put all error bars, but in the table caption, at least the average error (or maximum error) should be mentioned. I have looked into the supplementary materials to check the error estimates and I am not convinced by the results for TSP. For n=20, 50, and 100 the difference between HIPS and HIPS-epsilon is much smaller than the error estimates. Could you run the test on more instances to reduce the error or provide some argument why the results for TSP are meaningful? Figure 1. does not serve its role in illustrating the method. A reader wants to look at the image and quickly see the difference between the hybrid search and the other methods. The whole figure does not help in it. It illustrates only some flaws of other methods. I was unable to get any idea what a hybrid search is about just by looking at Figure 1. I think that a much better Figure can be created instead. Figure 3. Lines for epsilon=1e-5 are missing. You use that value in two problems, so we would like to see how it behaves on this graph. There is no information about the size of the dataset used for offline training. OOD Generalization should be more elaborated. It is a very interesting result. I would like to see more results, for example, a table similar to Table 1, Table 2, or Figure 3. Please consider this a minor weakness, I don't see any flaw in the part of the paper about OOD, I just want to say that the paper could benefit much from getting more results like this. The success rate is clearly greater. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions: 1. Why the error estimates for the success rate on the TSP problem are so high compared to the other tasks? 2. How sample-efficient is the training of hybrid search? Is there any way to estimate the number of samples needed to train the method in a new env? 3. How does the performance of hybrid search depend on the size of the training dataset? 4. 
Clearly TSP is an outlier: the results on this task look different (I mean: Figure 3, Table 7 in supplementary materials or lines 261-262). Do you know why it is so? What is so specific about TSP? 5. Do you have results similar to Table 3 but for other problems? The most interesting here is the average number of nodes of expansion. 6. A typical solution produced by HIPS-epsilon was constructed from some number of high-level actions and some number of low-level actions. What is the ratio of those? How does it depend on epsilon? 7. What happens if you run HIPS-epsilon on a problem with no solution (e.g. unsolvable Sokoban board)? For example, the state space in Sokoban is finite. Could HIPS be used to classify if the solution exists? Sokoban is an interesting example since deciding if a given board is solvable is an NP-hard problem. 8. Was AdaSubS tuned for the experiments? Its performance strongly depends on the chosen hyperparameters. lines: 269-272: was any component of HIPS or HIPS-epsilon trained after modification of the dataset? I suppose that no, just want to be sure. Suggestions: I think you should mention the tasks on which the method is evaluated both in the abstract and introduction. It is important for the reader. In Section 3.1 some comment about the meaning of the heuristic factor is missing. For a reader who is not familiar with PHS it may be hard to quickly get how it depends on h(n) and why we need the heuristic factor at all. I know that this can be found in the cited papers. I only suggest adding a footnote or comment with some motivation or explanation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some limitations were mentioned in Section 5, but in my opinion not all. 
All problems used for testing HIPS-epsilon have compact state representations and finite action spaces (there are problems with discrete yet infinite action spaces). Also, HIPS-epsilon is useful only on problems for which the solution exists (it is not a problem, but should be mentioned). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
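The heuristic factor the reviewer asks about comes from PHS (policy-guided heuristic search). As a hedged sketch, assuming the PHS* form from Orseau and Lelis where the evaluation is $\varphi(n) = \eta(n)\, g(n) / \pi(n)$ with heuristic factor $\eta(n) = (g(n) + h(n)) / g(n)$, the dependence on $h(n)$ can be written out directly (function name and log-space formulation are my own):

```python
import math

def phs_star_priority(g, h, log_pi):
    # sketch of the PHS* evaluation in log space:
    #   phi(n) = eta(n) * g(n) / pi(n),  eta(n) = (g(n) + h(n)) / g(n)
    # a larger remaining-cost estimate h(n) inflates eta(n) and hence
    # the node's priority value, pushing it later in the queue
    eta = (g + h) / g
    return math.log(eta * g) - log_pi
```

Note that with $h(n) = 0$ the factor $\eta(n)$ reduces to 1 and the priority falls back to the Levin tree search cost $g(n)/\pi(n)$, which is why the heuristic factor is needed at all: it is the only place the heuristic enters the search.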
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and questions. >Are all results statistically significant? >Could you run the test on more instances to reduce the error or provide some argument why the results for TSP are meaningful? We will incorporate the information about statistical significance in the main text. At the moment, the statistical significance of the results has been computed in the supplementary material. The results are statistically significant in almost all cases except for TSP. Note that for TSP, we expect HIPS-$\varepsilon$ and HIPS to be equal given that HIPS already attains a 100 % success rate and low-level expansions are never performed. The difference is not expected to be statistically significant. >Figure 1. does not serve its role in illustrating the method. We will do our best to improve the figure for the final version of the paper by clearly marking and annotating the figure, illustrating the search queue, and reducing the number of nodes, in case the paper is accepted. > Figure 3. Lines for epsilon=1e-5 are missing. You use that value in two problems, so we would like to see how it behaves on this graph. The results for $\varepsilon = 10^{-5}$ are very close to those of $\varepsilon = 10^{-3}$, so we chose to omit $10^{-5}$ for clarity. We will include a figure with $\varepsilon = 10^{-5}$ in the final version of the paper if accepted. > There is no information about the size of the dataset used for offline training. We use the same dataset as in [1], that is, 10340 trajectories in Sokoban, 5100 in STP, 22100 in BW, and unlimited but extremely low-quality trajectories in TSP. >OOD Generalization should be more elaborated. We added a similar experiment on Sokoban (see Figure 6 in the common PDF), and added numerical results in the appendices (due to lack of space). > Why the error estimates for the success rate on the TSP problem are so high compared to the other tasks? 
The performance of HIPS-$\varepsilon$ in TSP is more variable than in other environments. We hypothesize that it is due to the environment being particularly sensitive to the success of the segmentation (i.e., whether the detector agent always learns to take every city visit as a subgoal). >Sample efficiency In this work, we used datasets of fixed sizes, and HIPS-$\varepsilon$ outperformed the baselines for all sizes, but unfortunately we do not have any general estimates. However, note that the sample efficiency of the overall method depends on the chosen subgoal search approach, and in particular, the generative model (in this case, HIPS, and VQVAE). Generally, our results indicate that the hybrid search proposed by us will improve the sample efficiency of the underlying subgoal search (that is, HIPS-$\varepsilon$ will be better in this respect than HIPS), as it will enable the agent to deal with the imprecisions of the low-level policy and generative model, similarly as in the OOD experiments. Table 3 in our submission indicates that the sample efficiency can improve even if the problem is solvable with subgoal search. >What is so specific about TSP? The main difference is that HIPS already solves 100 % of the environments, so there is no room for improvement. In general, as mentioned above, we think that the environment makes successful segmentation particularly important, and there is some variance in terms of the performance of the detector agent trained with REINFORCE. > Do you have results similar to Table 3 but for other problems? The most interesting here is the average number of nodes of expansion. Unfortunately, we do not have the same results for other problems, as we sampled the evaluation environments in those problems. However, in Box-World, the average number of expansions is 3.56 vs 3.82 in favor of HIPS-$\varepsilon$ (p<0.001) for all solved problems. 
In Sokoban and TSP, we would expect the averages to be approximately equal for problems solved by both methods (given that $\varepsilon \to 0$ seems to work the best). > A typical solution produced by HIPS-epsilon was constructed from some number of high-level actions and some number of low-level actions. What is the ratio of those? How does it depend on epsilon? Please see Figure 7 in the common pdf. High-level actions dominate the solutions, with low-level actions making up between 0.2 % and 15 % of actions in our experiments; their share grows with the value of epsilon, as expected. >What happens if you run HIPS-epsilon on a problem with no solution (e.g. unsolvable Sokoban board)? For example, the state space in Sokoban is finite. Could HIPS be used to classify if the solution exists? Sokoban is an interesting example since deciding if a given board is solvable is an NP-hard problem. Unfortunately, the only way is to run the search until the search queue empties. This is an interesting topic for further work. >Was AdaSubS tuned for the experiments? To save computational resources, we copied the results for AdaSubS from [1], who did not elaborate in their paper on how the AdaSubS hyperparameters were chosen. In this paper, we incorporated our search approach into HIPS, yielding HIPS-$\varepsilon$. A further interesting evaluation could be incorporating hybrid search into AdaSubS, yielding AdaSubS-$\varepsilon$. In this case, we could also check hyperparameter optimization for AdaSubS. >lines: 269-272: was any component of HIPS or HIPS-epsilon trained after modification of the dataset? I suppose that no, just want to be sure. No, no retraining was performed. >Suggestions: >Some limitations were mentioned in Section 5, but in my opinion not all. 
Thank you for the suggestions and for pointing out the limitations; we will try to fit them into the main text while staying within the page limit. [1] Kujanpää, K., Pajarinen, J., & Ilin, A. (2023). Hierarchical Imitation Learning with Vector Quantized Models. ICML 2023. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for all the answers.
Summary: The authors present an idea of enriching a classical hierarchical search pipeline with an exhaustive low-level search. This approach guarantees the completeness of the search and offers practical advantages, including slightly better success rates in the tested environments and stronger out-of-distribution evaluation properties. The method is built on top of the HIPS algorithm. The paper provides technical adjustments to the theory of the original HIPS that cover the hybrid approach. Strengths: The paper is mostly clear and well-written. The main idea is intuitive and the reported experiments support it. The OOD application seems very promising to me. Weaknesses: The novelty of the approach is limited, although the paper may still be a good contribution. I am not convinced that the completeness is a major concern in itself; the paper lacks a clear justification for that. The impact of tuning the most important parameter $\varepsilon$ on the number of low-level expansions should be discussed. The key parameters used for evaluating the presented methods should be provided. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In general, I think that the paper is solid and well-presented. The novelty is limited, as the main idea is a formalized study of ideas hinted at in some previous papers and an extension of known algorithms. Nevertheless, I think it may be a good contribution since the described approach is simple, yet offers some clear advantages. That said, I would like a few clarifications. See my comments below. l.15: You claim that your approach guarantees completeness, which is clear. However, it is not clear to me why we should care so much about completeness. I suggest adding to the paper (perhaps the introduction) a short justification of the necessity of having the completeness property, as you claim it to be your main advantage. l.92 How much data do you use for training in a single environment? How long do you train each of the networks?
l.141: I understand that the value of $\varepsilon$ corresponds to the density of using low-level actions. But I would like to understand how the specific values (importantly, those used in experiments) correspond to the number of low-level expansions performed during the search. For instance, does setting $\varepsilon=10^{-5}$ correspond to roughly one expansion every $10^5$ steps? I don't think so, because it would have quite a negligible impact on performance. In l.212 you claim that a low value of $\varepsilon$ is generally preferred, but I don't know how it exactly relates to the search itself. l.173: Please refer to the exact place in the appendix. Table 1: Is the $\infty$ budget a theoretical bound (like we're sure that given $10^{20}$ iterations low-level search would solve TSP optimally), or did you simply run the methods for a very large number of steps? Please state it clearly. Table 1: Since setting $\varepsilon\to 0$ essentially means that you perform exhaustive expansion in case the search would otherwise fail, why does your HIPS-e achieve (a little) worse results than HIPS in TSP? Table 1: Please provide the values of the main hyperparameters used for evaluating all the approaches presented in Table 1. Did you tune any values yourself? The results of the AdaSubS baseline on Sliding Tile Puzzle and TSP seem quite low compared to kSubS. As far as I understand, it is a generalization of kSubS, so why is it so much worse? l.259: Arguably the simplest approach to improving the completeness is to increase the number of subgoals generated at each node expansion. I wonder how your empirical results relate to tuning that parameter. In particular, please specify the values that you use. l.266 I really like the OOD idea. Intuitively, it seems clear that augmenting the search with reliable low-level expansions is helpful in case the generator struggles in unknown domains. I think it deserves a more detailed analysis.
Did you observe similar patterns in other environments? What budget did you use for the reported results? Or are these results _theoretical_ (i.e. HIPS-e is _guaranteed_ to solve everything eventually) (in which case you should also remark it)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are discussed. The negative societal impact is not a concern here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and questions. >I am not convinced that the completeness is a major concern itself. > > l.15 Why should we care so much about completeness We see completeness as a worthwhile problem to tackle for four main reasons: 1. Without completeness, we do not know whether a solution will be found when the algorithm is executed. This is critical both for theoretical analysis and for practical algorithms. 2. Completeness is the key to the promising OOD capabilities shown by our agent, as completeness guarantees that the solution to the tasks will be found, even if the generative model or low-level policy is imperfect. Completeness also allows extensions and new incremental algorithms that require building on top of exact solutions. One example of that could be curriculum learning. 3. Achieving completeness makes it possible to apply high-level search as an alternative to low-level search in safety-critical real-world systems. 4. Without completeness, comparing different high-level search algorithms requires a somewhat arbitrary balancing of efficiency and solution percentage. We will include this discussion in the final version. > The impact of tuning epsilon > > l.141: How the specific values [of epsilon] correspond to the number of low-level expansions We performed an additional study on STP and analyzed the impact of $\varepsilon$ on the number of low-level expansions performed during the search and on the number of low-level actions included in the discovered solutions. Please see Figure 7 in the global response PDF for the results. As expected, the relative share of low-level expansions and actions decreases as we decrease epsilon, and the dependence is monotone. Note that $\varepsilon$ only affects the probability assigned to the node ($\pi(n)$ in Eq. 4 of the paper), but the node evaluation function also depends on the node’s depth and low-level distance from the root, and on the value of the learned heuristic function.
Hence, there is no deterministic correspondence between the value of epsilon and the share of low-level expansions. Nevertheless, we observe that for $\varepsilon = 0.5$, the share of low-level expansions is slightly over 40 %, so there is a rough correspondence for larger values of $\varepsilon$. In line 212, we hypothesize that a low value of $\varepsilon$ should, in most cases, lead to a more efficient search due to high-level actions being used more often, but with a worse worst-case performance. > Key parameters used for evaluating presented methods. We will add the hyperparameters used for evaluating HIPS and HIPS-epsilon to the final version. Note that for AdaSubS, kSubS, and other baselines, we copied the results from [1] (lines 238-239 in the submission) to save computational resources. We used the same hyperparameters for HIPS as in [1]. The only hyperparameter that we tuned for HIPS-$\varepsilon$ was the value of $\varepsilon$. AdaSubS has a learned low-level policy, whereas kSubS uses a low-level search. Therefore, AdaSubS solves a more difficult problem than kSubS (and an equally difficult one as HIPS-$\varepsilon$). The authors of [1] report that AdaSubS struggles to reliably reach the generated subgoals with the learned low-level policy, which would explain the results. > l.92 Training data and procedure We use the same dataset as in [1], that is, 10340 trajectories in Sokoban, 5100 in STP, 22100 in BW, and unlimited but extremely low-quality trajectories in TSP. The datasets used in [1] also contain a validation set. We used the validation loss for early stopping. > l.173: Exact place in the appendix. Appendix B; we will add the exact reference to the text. > Table 1. Is the budget a theoretical bound For HIPS-$\varepsilon$, a budget of 10,000 expansions was sufficient to solve all the problem instances in our experiments in Table 1. For Sokoban and STP, we ran a PHS* low-level search for all problems until all solutions were discovered.
For BW and TSP, we had a computation budget of 128,000 expansions, and we needed to interrupt the PHS* evaluation runs before a 100 % solution rate was attained. >Table 1: Why does your HIPS-e achieve (a little) worse results than HIPS in TSP? The difference between HIPS and HIPS-$\varepsilon$ in TSP is just noise (please check the complete results in Appendix E, Table 7). Thank you for raising this question, we will clarify it in the main text. > l. 259. Number of subgoals Unfortunately, we did not have the computational resources for an in-depth analysis of how the number of subgoals generated at each node expansion affects the completeness. However, looking at Figures 6b, 7b, 8b, and 9b in the Appendix J of [1], it seems that the VQVAE is already "saturated" and increasing the number of VQVAE codes does not increase the number of generated subgoals. For instance, in Figure 7b, it appears as if the VQVAE generated 4, 4, and 7 valid subgoals for the given states, even though the requested number of subgoals was 64. Hence, given a VQVAE generative model, increasing the number of subgoals is very unlikely to lead to completeness. For the autoregressive network used in kSubS and AdaSubS, that could be verified separately in the future. > l.266 OOD idea We observed a similar pattern in Sokoban, although the advantage of HIPS-$\varepsilon$ is slightly smaller due to HIPS generalizing better than in Box-World. Please see Figure 6 in the common PDF for the results. In STP, generating OOD puzzles is impossible without increasing the board resolution, which would require re-training the networks, and in our preliminary experiments, we found that HIPS already generalizes well to increasing the number of cities in TSP, so the benefits of hybrid search are limited there. The budget for HIPS-$\varepsilon$ was 20,000 expansions, which was sufficient for solving all the evaluated problems. [1] Kujanpää, K., Pajarinen, J., & Ilin, A. (2023). 
Hierarchical Imitation Learning with Vector Quantized Models. ICML 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I acknowledge the importance of guaranteeing completeness, and I am pleased that you will include the justification. > The impact of tuning epsilon Thank you for the analysis, I think it's a nice insight into what's happening under the hood. Although it would be better to choose an environment where HIPS-$\varepsilon$ shows a greater advantage over HIPS than in TSP, in order to better understand the relation between low-level expansions and performance advantage. I don't demand providing another chart in this discussion, but I suggest adding it to the paper in later revisions. It looks like the benefit of using low-level expansions is related to the complexity of proposing a valid high-level subgoal (hence, more important in STP). Is that true? > Table 1: Why does your HIPS-e achieve (a little) worse results than HIPS in TSP? If it is just noise, I'm not sure if such an environment is relevant to this paper. I understand that you use it just because it was used in [1]. > l. 259. Number of subgoals Please explain how you handle invalid subgoals. In particular, how do you count them into the search budget? If you request 60 subgoals out of which 7 turn out to be valid, does it count as roughly 7 or 60 when calculating the budget for Table 1? It seems to me that you consider it as a single expansion, correct me if I'm wrong. That's unfortunate since it is way way more costly than a single step of low-level search. To be fair, I think you should include in the search budget the low-level steps used to verify the subgoals, both valid and invalid. Also, you should include running time comparisons between the methods. At least a simple mean running time, at least in the appendix. --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments and questions. 
>Although it would be better to choose an environment where HIPS-$\varepsilon$ shows a greater advantage over HIPS than in TSP, in order to better understand the relation between low-level expansions and performance advantage. Just to clarify, we chose to perform the analysis on STP (Sliding Tile Puzzle), not TSP (Traveling Salesman Problem). Sliding Tile Puzzle is the environment where HIPS-$\varepsilon$ shows the greatest performance advantage over HIPS. We will write the environment name out explicitly in the caption of Figure 6 to avoid this confusion in the future. >It looks like the benefit of using low-level expansions is related to the complexity of proposing a valid high-level subgoal (hence, more important in STP). Is that true? We agree. >If it is just noise, I'm not sure if such an environment is relevant to this paper. I understand that you use it just because it was used in [1]. We agree, which is why we did not focus too much on it in our discussion. However, we included TSP for two reasons: 1. We did not want to cherry-pick environments from [1] and only select those where our method is useful 2. It illustrates that by choosing a suitable strategy ($\varepsilon \to 0$), our hybrid search has no disadvantages over HIPS even if the hybrid approach is not necessary, and even if a sub-optimal value of $\varepsilon$ is used (see Figure 3), the loss in performance is reasonable. >Please explain how you handle invalid subgoals. In particular, how do you count them into the search budget? If you request 60 subgoals out of which 7 turn out to be valid, does it count as roughly 7 or 60 when calculating the budget for Table 1? It seems to me that you consider it as a single expansion, correct me if I'm wrong. That's unfortunate since it is way way more costly than a single step of low-level search. To be fair, I think you should include in the search budget the low-level steps used to verify the subgoals, both valid and invalid. 
Also, you should include running time comparisons between the methods. At least a simple mean running time, at least in the appendix. Your understanding of how we count the search cost is correct: a node expansion always incurs a cost of one. The number of search node expansions has been used as the evaluation metric in prior work on subgoal search (kSubS, AdaSubS, HIPS), and we chose not to deviate from that. All high-level search methods that we used as baselines perform a very large number of low-level environment steps per search node expansion, which is why we believe that the comparison to other high-level search methods is fair. For instance, kSubS uses a low-level search to verify the subgoals, which requires a substantial number of low-level environment steps. When comparing against low-level search methods, one node expansion is naturally more expensive. If the number of low-level environment steps needed for verifying the subgoals is used as the search cost, one high-level expansion can be roughly 500-1000 times as expensive as a low-level expansion in the worst-case scenario, depending on the environment and the number of duplicate subgoals. This factor can be significantly reduced by parallelizing the subgoal verification. We believe that relying on a learned dynamics model to prune infeasible subgoals would also work for reducing the environment steps. However, HIPS-epsilon significantly outperforms low-level search in terms of node expansions, almost always by more than a factor of 1000 (see Table 12, for example). Thus, combining these two factors (and ignoring the benefits of parallelization), our method is superior to the low-level search in Sokoban, Box-World, and TSP. In the Sliding Tile Puzzle, where defining a suitable heuristic with prior knowledge is easy, our method is roughly on par with the low-level search in terms of environment steps (depending on the value of W).
We will modify Table 12 accordingly and include the running time comparisons in the appendix.
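The $\varepsilon$-mixing over high-level subgoals and low-level actions discussed in this thread can be illustrated with a small sketch. This is a toy illustration under simplifying assumptions (uniform mass over primitive actions, hypothetical function names), not the paper's exact Eq. 4 or the HIPS implementation:

```python
def mixed_child_probs(subgoal_probs, n_low_actions, eps):
    """Illustrative epsilon-mixing of child probabilities.

    A (1 - eps) share of the probability mass goes to the subgoals
    proposed by the generative model, and eps is spread uniformly
    over the primitive low-level actions. For any eps > 0, every
    low-level child keeps strictly positive probability, which is
    the mechanism behind the completeness guarantee.
    """
    high = [(1.0 - eps) * p for p in subgoal_probs]
    low = [eps / n_low_actions] * n_low_actions
    return high, low


# Example: two generated subgoals, four primitive actions, eps = 0.1.
high, low = mixed_child_probs([0.7, 0.3], 4, 0.1)
assert abs(sum(high) + sum(low) - 1.0) < 1e-9
assert all(p > 0 for p in low)  # low-level children are never pruned
```

As $\varepsilon \to 0$ the low-level children receive vanishing (but non-zero) probability, which matches the observation in this thread that a small $\varepsilon$ favors high-level expansions while preserving completeness.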
Summary: The paper considers solving complex planning problems with discrete action spaces and develops a novel hybrid search scheme that combines high-level sub-goal oriented search (aka hierarchical planning) with a complete low-level search scheme. The latter embodies a classical exhaustive search scheme that only considers low-level actions. The proposed approach is applied to an existing sub-goal oriented planning system called HIPS (Hierarchical Imitation Planning with Search). In contrast to the existing approaches, including HIPS, the new system is guaranteed to be complete, namely it will find a solution if one exists. Furthermore, the proposed enhanced HIPS is evaluated on four planning benchmarks which were also considered in previous work. The results clearly demonstrate the improved performance of the proposed approach compared with the baseline HIPS system as well as with strong existing offline reinforcement learning algorithms. Strengths: The paper is fairly well written and organised. The quality of the presentation is overall fairly good. The results are presented in a relatively clear manner so it's fairly easy to grasp the big picture. Weaknesses: My only concern is that the proposed approach looks fairly incremental compared with the existing work on HIPS. The main novelty seems to consist in adding the behaviour cloning policy to select low-level actions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The sliding puzzle and to some extent the box-world problems are considered to be fairly easy to solve by classical AI planners using some version of A* search. I was wondering how the proposed HIPS enhancement compares with classical planners on these domains. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I think the limitations of the proposed method are discussed fairly clearly in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and observations. > The main novelty seems to consist in adding the behaviour cloning policy to select low-level actions. We want to emphasize that not only do we add a new behavior cloning policy to efficiently solve a known problem of subgoal search methods, but we also analyze the impact it will have on the results, derive a new heuristic rule for efficiently using it in search (which is particularly important, see Table 4) and confirm that the empirical results match the expected ones. High-level search has been recognized as a promising research direction, as kSubS was published at NeurIPS 2021, AdaSubS at ICLR 2023 (Notable top-5 % = Oral), and HIPS at ICML 2023, so improving on these methods is relevant. Furthermore, we show that using the low-level actions not only guarantees completeness but can also improve the search performance on instances solvable by high-level search, which is a non-trivial result. Although we evaluated our approach on HIPS due to its strong performance, our framework can also be applied to other subgoal search methods such as kSubS and AdaSubS. Finally, we show promising OOD generalization capabilities, which are missing in many learning algorithms. The promising OOD results also open doors for new applications such as curriculum learning and the multi-task setting. > The sliding puzzle and to some extent the box-world problems are considered to be fairly easy to solve by classical AI planners using some version of A* search. I was wondering how does the proposed HIPS enhancement compare with classical planners on this domain. Defining a suitable heuristic for the Box-World problem (without prior knowledge about the solution path) is a highly non-trivial problem, which hampers our ability to apply the A* algorithm. If we naively apply Dijkstra's algorithm to it, we're bottlenecked by RAM before a solution is discovered (happens at ~700k node expansions). 
If we assume access to prior knowledge, a reasonable heuristic is to count the number of collected keys (which does not distinguish between distractors and correct keys) and subtract that from the goal length (requiring prior knowledge). This heuristic does not help us to reliably discover solutions (see Table 12 in the global rebuttal pdf) even if we perform WA* with W=10, and in most of the failure cases, we run out of RAM before a solution is discovered. For STP, note that the prior work in the planning domain has focused on the easier 4x4 variant, whereas we work on the 5x5 problem, which is considered significantly harder (see [1], pp. 71). Nevertheless, we can still use the Manhattan distance as a heuristic for A*. We experimented with a limit of 100,000 expansions (note that HIPS-$\varepsilon$ had a 69.5 % solution rate with 100 expansions and 93.8 % at 200). A* had a solution rate of 0 %. With WA* and W=2, the solution rate at 100,000 expansions was 10.2 %, which is significantly worse despite $10^3$ times more node expansions. For W=5, where the heuristic is used very greedily, the solution rate is 91.0 % at 100,000 expansions (see Table 12 in the rebuttal pdf). Even then, it is significantly inferior to HIPS-$\varepsilon$ in terms of expansions and requires applying prior knowledge for defining the heuristic, whereas HIPS-$\varepsilon$ does not assume any prior knowledge except the ability to recognize terminal states upon entering them. Finally, note that, for example, the HIPS paper [2] specifically used subgoal-based A* to generate the demonstrations for STP and Box-World because of how expensive the demonstration generation was with standard A*. [1] Russell, S. & Norvig, P. Artificial Intelligence: A Modern Approach. 3rd edition. [2] Kujanpää, K., Pajarinen, J., & Ilin, A. (2023). Hierarchical Imitation Learning with Vector Quantized Models. ICML 2023.
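The weighted A* baseline discussed above (priority f = g + W·h with the Manhattan-distance heuristic) can be sketched as follows. This is a generic toy implementation on the 3x3 puzzle for illustration only; the rebuttal's numbers come from the much harder 5x5 STP with larger budgets:

```python
import heapq


def manhattan(state, size):
    # Sum of Manhattan distances of tiles to their goal positions;
    # 0 denotes the blank and is ignored, as is standard.
    d = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal = tile - 1  # goal layout: 1..n*n-1, then the blank
        d += abs(idx // size - goal // size) + abs(idx % size - goal % size)
    return d


def weighted_a_star(start, size, W=1.0):
    """Best-first search with priority f = g + W * h. W=1 is plain A*;
    W > 1 weights the heuristic greedily (WA*). Returns (cost, expansions)."""
    goal = tuple(list(range(1, size * size)) + [0])
    frontier = [(W * manhattan(start, size), 0, start)]
    g_best = {start: 0}
    expansions = 0
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if g > g_best.get(state, float("inf")):
            continue  # stale queue entry
        expansions += 1
        if state == goal:
            return g, expansions
        blank = state.index(0)
        r, c = divmod(blank, size)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < size and 0 <= nc < size):
                continue
            nb = nr * size + nc
            child = list(state)
            child[blank], child[nb] = child[nb], child[blank]
            child = tuple(child)
            if g + 1 < g_best.get(child, float("inf")):
                g_best[child] = g + 1
                heapq.heappush(
                    frontier, (g + 1 + W * manhattan(child, size), g + 1, child)
                )
    return None, expansions


# A 3x3 state three moves away from the goal: A* (W=1) finds the
# optimal 3-step solution, since the Manhattan heuristic is admissible.
start = (1, 2, 3, 0, 4, 6, 7, 5, 8)
cost, _ = weighted_a_star(start, 3, W=1.0)
assert cost == 3
```

With W > 1 the search expands fewer nodes in exchange for possibly suboptimal solutions, which mirrors the W=2 vs. W=5 trade-off reported above.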
Rebuttal 1: Rebuttal: We want to thank all reviewers for taking the time to review our work and give feedback. Your insightful comments and observations are vital for improving the paper. We are grateful to the reviewers for appreciating the empirical results (all reviewers), the intuitive main idea (v7TG), clear motivation (jgQs), the broad evaluation and insightful comments (jgQs), the OOD application (v7TG, jgQs), and the presentation and writing (all reviewers). To address the questions of the reviewers, we attached a pdf with results from the following additional experiments: 1. OOD experiments on Sokoban with six boxes, when the model has been trained on Sokoban with four boxes (Figure 6). 2. Demonstrating how the number of low-level expansions made by the search and the number of low-level actions in the returned solutions are affected by the value of $\varepsilon$ in STP (Figure 7). 3. Comparing HIPS-$\varepsilon$ to classical planning algorithms in terms of node expansions, highlighting the difficulty of the problems HIPS-$\varepsilon$ is capable of solving (Table 12). We plan to add these to the final version to strengthen the paper even further. Furthermore, in the final version, we want to motivate why the completeness property is relevant (v7TG), improve the related work section to clarify the relation of our work to the prior work (i9rx), and make other adjustments based on the reviewer feedback. Pdf: /pdf/00cf182fa1127efd5e80144c4353004276010f6f.pdf
NeurIPS_2023_submissions_huggingface
2023
Kissing to Find a Match: Efficient Low-Rank Permutation Representation
Accept (poster)
Summary: The authors propose a provably exact and an approximate method for calculating permutation matrices that requires much less memory than previous approaches. The algorithms are based on kissing numbers and describe row relationships via their cosine values. This allows representing the problem with significantly fewer values than the baseline n*n permutation matrix representation. Additionally, the authors propose a relaxation of this exact algorithm, utilizing the SoftMax operator, that alleviates the issues with the optimization for large problems. The authors also show a number of applications where their method is applicable. Strengths: I believe this is a pretty strong work with a very general and useful contribution. Strengths in more detail: - A provably exact algorithm that significantly reduces the memory consumption of general permutation matrices. - Additionally, a relaxation is proposed that has better convergence properties in optimization. - The authors demonstrate on a number of applications that their method is applicable and useful. Weaknesses: First, I wanted to give a strong accept, but the experiments left me slightly in doubt. My main problem is the following: the experiments focus a little bit too much on the application side (which is amazing), but miss a general comparison between baselines and the proposed methods. I would have liked to see comparisons to SOTA algorithms for finding permutation matrices (the related work mentions many), showing run-time, memory, and errors. While the exact algorithm will probably have 0 error, some other methods might have slightly larger error while being orders of magnitude faster. Also, to my understanding, the exact algorithm breaks down as the problem size grows, at which point the SoftMax version is used, which is no longer exact. Also, it would be good to see the breaking point of the exact algorithm where it does not converge anymore. In brief, a thorough comparison to the SOTA is missing.
I still like the paper, but without such a comparison (considering the theoretical value), I do not give a strong accept. If the authors can provide such an analysis to understand the trade-offs in their rebuttal, I will consider improving the rating. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Experiments: - L191 is translation not considered on purpose? - L204-209 This is a little bit unclear. Does the proposed method always, without failure, find the correct permutation and transformation? If this is the case, this should be highlighted more. If not, an error value should be shown. - L209 Equally good results with Softmax. Does this mean that the approximation in this noise-free case can be compensated for? - L215 refers to Fig. 2, but that seems to show something different (Dense/Sparse) and not the topic the authors mention at this line. Typos: L135 "either exact" -> "either as exact" L156 "valdidate" -> "validate" L179 missing dot from the end of the sentence L230 "similariy" -> "similarity" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Regarding the request for comparison to baseline methods, please refer to the general response to all reviewers, where we have shown comparisons to baseline methods within the functional maps framework [30]. #### Question 1 L191 is translation not considered on purpose? #### Answer 1 Yes, we didn’t consider translations, as the experiments were primarily intended as a proof of concept, and translation can be removed through centering. #### Question 2 L204-209 This is a little bit unclear. Does the proposed method always, without failure, find the correct permutation and transformation? If this is the case, this should be highlighted more. If not, an error value should be shown. #### Answer 2 In these experiments the proposed method always finds the correct permutation, and with it also the correct assignment of the transformed vectors. We will further highlight this in the final version. Nevertheless, we observe small shifts in the transformation of the vectors, for which we will add the accuracies (see the general response to all reviewers). #### Question 3 L209 Equally good results with Softmax. Does this mean that the approximation in this noise-free case can be compensated for? #### Answer 3 Yes, for finding the correct assignment in this example, this is the case. #### Question 4 L215 refers to Fig. 2, but that seems to show something different (Dense/Sparse) and not the topic the authors mention at this line. #### Answer 4 In this line, we referred to the memory reduction, which is shown in Fig. 2. But we agree with the reviewer that this point is not clear in the submitted version. We will rewrite this part of the text to ensure that it is correctly understood. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' answers and the baseline comparison. I like the paper and improve my rating to strong accept.
Summary: This paper proposes a formulation for representing high-dimensional permutation matrices. The basic idea is to express the n x n permutation matrix as an elementwise non-linear function of a product of low-rank matrices V, W (n x m). The authors introduce kissing-number theory, which gives the minimum number 'm' for which such a decomposition is provably possible, and then present intuitive proofs that, in scenarios where the decomposition is valid, their construction is appropriate. In addition, they also promote two well-known non-linearities (ReLU and SoftMax) which can be used in the optimization process for the representation. Results are demonstrated on two synthetic scenarios and one recent 3D shape-matching scenario. Broadly, the experiments convey a significantly reduced memory consumption and the ability to recover permutations. In the shape-matching example, the proposed formulation also slightly improves on the previous baseline. Strengths: - The core idea of this paper (low-rank representation of permutations with nonlinearities) is *very* interesting and potentially has a very wide impact in scenarios where matching between pointsets is a crucial problem (linear assignment, quadratic assignment, etc.) - Overall the writing of this paper is very good, and the background and related material on kissing-number theory has been introduced and explained in an interesting way. - The benefits of the proposed construction are (1.) a strong memory reduction in representing permutation matrices and (2.) an accompanying optimization scheme that allows for recovering permutations Weaknesses: - Broadly, I felt the experimental section is very rudimentary. There are very few comparisons to conceptual baselines: namely using stochastic matrices, optimal transport, the Hungarian algorithm, and nearest-neighbor-like methods. It is not clear from the experiments whether the obtained solutions are not just some permutations, but the *correct* permutations.
- More specifically, for the point cloud example in section 4.2, what is the accuracy of the recovered transformation $\Theta$ as a function of n? How do other conceptual baselines compare in this example both in terms of accuracy and memory complexity? - Despite the experiments on Marin et al., perhaps a simpler and more convincing demonstration would be to use the proposed permutation representation in either or all of [43], [30], etc. where a linear/quadratic assignment is solved and then compare with the original methods (perhaps yielding an improvement in memory with a comparable or better accuracy) Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: See Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: I do not think this paper has any direct negative societal impact. Please see the weaknesses for technical limitations. Overall, this paper has a very interesting idea and a clever conceptual message on representing permutations. My biggest concern is whether the proposed construction is impactful in terms of ease of optimization and acceptable accuracies for the multitude of shape-matching problems that this can be applied to. Given the lack of comparisons to conceptual baselines (i.e. not in terms of a state-of-the-art shape matching paper, but even simply comparing with previous permutation representations like stochastic matrices, or spectral decompositions - in any framework), I am inclined towards a weak reject at this point. I can be convinced of the gains in memory complexity of the proposed representation but am not yet sold on its applicability and accuracy.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Regarding the request to include conceptual baselines to compare in the point cloud example, we have included experiments that can be found in the general response to all reviewers under the topic "Comparisons with Permutation Baselines in [30]". These experiments include comparisons in accuracy, time, and memory performance for extracting point-wise correspondences, compared to the LAP solver, nearest neighbors, optimal transport, and Sinkhorn iterations, within the functional maps framework [30]. #### Question 1 It is not clear from the experiments whether the obtained solutions are not just some permutations, but the correct permutations. #### Answer 1 In the LAP and QAP experiments, we evaluated the permutation matrix in terms of the relative error of its energy compared to the energy of the optimal solution. We will make this clear in the final version. #### Question 2 More specifically, for the point cloud example in section 4.2, what is the accuracy of the recovered transformation as a function of $n$? #### Answer 2 We will add the accuracy values of the transformation (depending on $n$), measured as the distance between the true and the transformed point clouds, to the final version (see the general response to all reviewers for the accuracy values).
Summary: The paper proposes a novel approach for representing permutation matrices with low-rank matrix factorisation. The method employs Kissing number theory to find the minimum rank necessary to represent the target matrix. This is often quite a bit lower than the rank of the original matrix, so it allows for a more efficient memory representation. The paper also shows how the approach can be used in practice in various relevant problems, such as point cloud alignment or shape matching. Edit: I have read the rebuttal and, given the scores from other reviewers, I would like to keep my score. Strengths: * The work deals with a meaningful problem, and produces an elegant solution with wide ranging applicability. * The authors demonstrate the performance of their approach in various problems, demonstrating similar / better accuracy with a significant improvement in memory requirement. * The paper is very well written and easy to read. * The experiments are relevant. * The formulation is principled and includes several relevant proofs. Overall I believe this is a perfect NeurIPS paper. While I have little knowledge of Kissing number theory, the paper solves a very relevant problem in an elegant way, and demonstrates impressive and wide-ranging practical applicability in multiple domains. I therefore strongly believe the work should be accepted. Weaknesses: I have very few negative notes about the work, though I could say that the ablation section could be expanded, e.g. by seeing how increasing the stochastic training k affects speed and accuracy. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: none Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: These are addressed at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your very positive response to our work. We have added an analysis of the relationship between $k$ and the optimization time to the general response.
Summary: This work addresses the problem of estimating large permutation matrices by approximating them with a low-rank factorization followed by a nonlinear mapping, thereby reducing the storage complexity from O(n^2) to O(n). The main contribution of the paper is i) a theoretical derivation and proof of the minimal required rank of the factorization matrices, and ii) the use of a nonlinearity, such that the full rank permutation matrix can be restored *exactly*. i) is based on the Kissing number (or bounds on it), while the nonlinearity in ii) can be a ReLU. Importantly, the possibility to exactly represent any permutation matrix is in strong contrast to direct low-rank factorizations which can only approximate the permutation matrix (since unable to recover the full rank). The authors propose practical solutions for the optimization: A smooth version of the nonlinearity (softmax), and an optimization scheme (inspired by stochastic optimization; gradients consider one row of the factorization matrices at a time) that approximates the full objective, but never requires building the full permutation matrix, thus fully leveraging the compact representation and resulting storage savings. Experimental results are extensive and demonstrate the applicability of the method on point cloud alignment, linear and quadratic assignment problems, and shape matching. Strengths: **S1** Proposition 1 and Eq. (7) represent a novel contribution. The insight that a non-linear mapping of a matrix factorization is able to recover the full rank permutation matrix is a strong contribution and is of interest for the wider community. **S2** The proof of the minimal required factorization matrix rank via the Kissing number is an interesting theoretical contribution. Practically, it also provides clear guidance on the required size of factorization matrices such that the exact permutation matrix can be recovered. **S3** The proposed stochastic optimization in Sec.
4.1 provides a practical algorithm for optimizing the permutation matrix, while leveraging the compact representation (the full permutation matrix is never built; entries are only computed element-wise as needed). The non-smoothness of the ReLU is addressed by a soft-max function with a controllable temperature parameter that allows approximating the exact solution with the desired accuracy over the course of the optimization procedure. **S4** Experimental results are performed across different application domains and demonstrate the usefulness of the proposed approach. Weaknesses: None, but I'm also not an expert in the area. I'm especially not knowledgeable about related work and thus cannot judge the novelty of the proposed approach. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: **Q1** Is there a relation between the Kissing number and the ReLU? Is it conceivable that there exists another non-linear function that allows an even lower rank factorization while still being able to represent the exact permutation matrix? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### Question Is there a relation between the Kissing number and the ReLU? Is it conceivable that there exists another non-linear function that allows an even lower rank factorization while still being able to represent the exact permutation matrix? #### Answer The ReLU merely serves as a type of thresholding operation and could be replaced by any other function that is zero for all values below a certain threshold and one for an input value of one. In fact, an arbitrarily low rank $\geq 2$ still allows representing any permutation exactly by letting the threshold approach one. Yet, since gradients of any entry below the threshold are zero, such a representation becomes increasingly difficult to optimize (please also see the answer for reviewer AkAs). We will add a discussion on this aspect in the revised version of this paper.
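The thresholding construction discussed in this thread (unit-norm rows with pairwise inner products below 0.5, the factor $W = PV$, and an elementwise ReLU) can be sketched in a few lines of numpy. This is a hedged illustration with invented dimensions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 250  # m is far below n, yet large enough that coherence < 0.5

# Unit-norm Gaussian rows; their pairwise inner products concentrate
# around 2*sqrt(log(n)/m) ~ 0.33 here, safely below the 0.5 threshold.
V = rng.standard_normal((n, m))
V /= np.linalg.norm(V, axis=1, keepdims=True)
coherence = np.abs(V @ V.T - np.eye(n)).max()
assert coherence < 0.5

# A random target permutation P and the matching factor W = P V.
perm = rng.permutation(n)
P = np.eye(n)[perm]
W = P @ V

# Elementwise ReLU thresholding restores P exactly:
# (W V^T)_{ij} = 1 if j = perm(i), and <= coherence < 0.5 otherwise.
P_hat = np.maximum(2.0 * (W @ V.T) - 1.0, 0.0)
assert np.allclose(P_hat, P)
```

Note that only the $2nm$ entries of $V$ and $W$ need to be stored, rather than the $n^2$ entries of $P$; the dense reconstruction above is only for the check.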
Rebuttal 1: Rebuttal: We thank all reviewers for their comments that helped us improve the presentation of our work. In the following, we address the concerns that have been raised, and we provide additional experimental results: # Comparisons with Permutation Baselines in [30] We conduct an additional experiment where we compare against several classical optimization approaches in the point cloud alignment setting (see Section 4.2) in the context of the functional maps framework [30]. We compare our method, which calculates correspondences in the same way as described in the Point Cloud Alignment experiment in Section 4.2, to a general linear assignment problem (LAP) solver (specifically the Jonker-Volgenant algorithm from scipy.optimize.linear\_sum\_assignment, which is in practice faster than the Hungarian algorithm), nearest neighbor computation, optimal transport (as implemented in the Python POT package), and stochastic matrices generated by Sinkhorn iterations. The experiment setup is as follows (all details will be included in the final version): The goal is to extract a point-to-point correspondence between two shapes $X, Y$ from an $m \times m$-dimensional functional map [30] where $m$ is much smaller than the number of vertices in $X$ and $Y$. A possible way to do this is a nearest neighbor (NN) search over the spectral point representations $\Phi_X, \Phi_Y \in \mathbb{R}^{n \times m}$ aligned by the functional map, as proposed in the original paper (see [30, Section 6.1]). However, it is also possible to find an assignment between all rows of $\Phi_X, \Phi_Y$ by other means, for example by solving a linear assignment problem if a bijection is desired. This is exactly the point cloud alignment setting from our experiments in Section 4.2 with a small amount of noise in the point clouds, and we show that our method outperforms all baselines in terms of geodesic error of the final matching and shows positive trends in terms of runtime and memory consumption.
We use the FAUST registrations [3] with the original $6890$ vertices and a version downsampled to $502$ vertices for these experiments. The rows of $\Phi_X, \Phi_Y$ are generated by applying the ground-truth functional map to the spectral embedding of each vertex and can be assumed to be permuted, noisy versions of each other, such that we can directly use them as $V$ and $W$ for our method. We will add more details about the exact setup for all methods in the final version. The results of this comparison can be seen in Table 1 in the response PDF. Our method has the best runtime/memory ratio on the higher-dimensional example apart from nearest neighbors (which is known to become quite unreliable if the alignment is not tight) while still providing the most accurate results. # Clarification of the Method Strengths Our work proposes a novel efficient approach for representing permutations, which inherently provides a guaranteed minimal decomposition rank $m$ (and therefore the minimal memory requirements) necessary for representing a problem of size $n$. This allows tackling large problems whose size $n$ could not be handled by existing methods. We believe that the strength of our method lies in its ability to efficiently represent permutation matrices while still being differentiable and enabling techniques such as sparse/stochastic optimization, which will be particularly efficient in a supervised learning setup. However, it is not a plug-and-play solution that can be quickly incorporated into any pipeline. Therefore, a direct but fair comparison to other learning-based permutation predictors like Sinkhorn layers is not straightforward to implement but requires individual adaptations of our method for each setting. We showed that it is possible to considerably improve memory requirements by including it in Marin et al. [26], and we strongly believe this is possible in other methods with more research in this direction.
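As a toy stand-in for the correspondence-extraction comparison described in the general response above (all sizes and the noise level are invented for the sketch; this is not the FAUST setup), one can contrast a LAP solver with an independent nearest-neighbor search on permuted, noisy embeddings:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n, m = 200, 30  # toy stand-ins for vertex count and spectral dimension

# Phi_X plays the role of a spectral embedding; Phi_Y is a permuted,
# slightly noisy copy, mimicking embeddings aligned by a functional map.
Phi_X = rng.standard_normal((n, m))
perm = rng.permutation(n)
Phi_Y = Phi_X[perm] + 0.01 * rng.standard_normal((n, m))

# Pairwise distances between rows of Phi_Y and Phi_X.
cost = np.linalg.norm(Phi_Y[:, None, :] - Phi_X[None, :, :], axis=-1)

# Baseline 1: LAP solver (Jonker-Volgenant), guaranteed bijection.
rows, cols = linear_sum_assignment(cost)
lap_match = cols  # row i of Phi_Y is matched to vertex lap_match[i] of X

# Baseline 2: independent nearest-neighbor search, no bijectivity guarantee.
nn_match = cost.argmin(axis=1)

assert np.array_equal(lap_match, perm)  # exact recovery at this noise level
```

At larger noise levels the NN matches stop being a bijection while the LAP solution remains one, which is the trade-off the comparison in the response targets.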
# Further Experimental Details on Point Cloud Alignment and Shape Matching We include additional accuracy values on the prediction of a linear transformation over point clouds, by measuring the distance between the true point cloud and its transformed counterpart. We add these additional accuracy values for each problem size ($n$) which were previously outlined in the initial version of our work (see Table 2 in the response PDF). To further expand the ablation study, we will include a comparison of the training speed for stochastic optimization in the shape-matching experiment, depending on the stochastic variable $k$, to the results in Figure 4b. The values of this additional experiment can be found in Table 3 in the response PDF. Pdf: /pdf/264fd1cce9719eb38377efc1976280d9c5889851.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a novel way to decompose a permutation matrix into two low-rank matrices so that a significant amount of space can be saved to store a permutation. Then the authors further implement such decomposition to solve practical tasks of point cloud alignment, assignment problem and shape matching. Strengths: Overall, the paper is well-written and clearly explains the heuristics behind the method. The method of using Kissing numbers and non-linearity to perform low-rank decomposition of a permutation matrix is very interesting and can possibly inspire future research. Based on Fig 5, the method indeed provides memory saving for supervised learning. Weaknesses: The most troubling weakness of this method lies in its value of application. Specifically, I have the following concerns: 1. The method introduces two complex problems to optimization: bi-variable structure and non-linearity. As mentioned by the authors, this requires devising a non-trivial, problem-specific adaptation for each problem. 2. To avoid forming an $n\times n$ matrix, the authors propose to only optimize over a handful of entries, which basically requires knowing the ground truth permutation (or assuming some sparse structures as in LAP). This leads to a very limited set of application scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Kissing number is based on a threshold of arccos(0.5), does this mean that if we switch to a smaller angle, we can fit more vectors in a unit sphere, thus save more space to store a permutation? If so, why don't we do so? 2. When both $V$ and $W$ need to be optimized, are they optimized in parallel or alternatingly? Specifically which optimization algorithm is used for your numerical tests? 3. How do you implement Stochastic Optimization for LAP and QAP when $A$ is dense? 4. In the part (b) of Fig 5, what is the scale of y axis? The term "Relative" here is very confusing and without explanation. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations of the work are mostly mentioned in the weakness section. The authors indeed addressed them but it is still quite confusing to read for the first time. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### Question 1 Kissing number is based on a threshold of arccos(0.5), does this mean that if we switch to a smaller angle, we can fit more vectors in a unit sphere, thus save more space to store a permutation? If so, why don't we do so? #### Answer 1 Yes, it is correct that the threshold does not need to be 0.5. In fact, if we pick any $V \in \mathbb{R}^{n \times m}$, $m \ll n$, with normalized rows that are non-repeating, then the threshold $\mu := \max_{i\neq j} \langle V_{i,:} , V_{j,:} \rangle$ allows using our approach for representing a permutation $P$ by choosing $W=PV$. Even for Gaussian random matrices, this allows reducing the rank exponentially, see e.g. "Limiting Laws of Coherence of Random Matrices with Applications to Testing Covariance Structure and Construction of Compressed Sensing Matrices." by Cai and Jiang, where it is shown that $\mu$ behaves like $2\sqrt{\frac{\log(n)}{m}}$ if $n$ is an exponential of $m$. Yet, the ReLU indeed acts as a thresholding such that thresholds approaching 1 make it extremely difficult to still optimize the resulting objective. We found a threshold of $0.5$ to yield a good compromise between the ability to optimize (with first-order methods) and an accurate representation of permutations via a low-rank factorization while benefitting from some additional literature deriving theoretical bounds on the factorized dimension. Adaptive thresholds (e.g. using continuation schemes) and/or softmax approaches with iteratively increasing temperature (converging from soft- to hard-(arg)-max) are of course possible fine-tuning options to improve results in particular applications. #### Question 2 When both V and W need to be optimized, are they optimized in parallel or alternatingly? Specifically which optimization algorithm is used for your numerical tests? #### Answer 2 The estimation of V and W is performed in parallel using the Adam optimization algorithm.
We will clarify this aspect in the final version of the paper. We will also upload our code and make it publicly available upon acceptance. #### Question 3 How do you implement Stochastic Optimization for LAP and QAP when $A$ is dense? #### Answer 3 For LAP the similarity matrix $A$ has the dimension $n \times n$ (the same dimension as the permutation matrix), for QAP the similarity matrix is even larger with $A \in \mathbb{R}^{n^2 \times n^2}$, or for the Koopmans and Beckmann formulation we have two matrices of the dimension $n \times n$. If those matrices have to be computed densely, they would require as much memory as a fully calculated permutation matrix or even more, so the stochastic optimization would not make any difference. Yet, considering that the costs are still sums over many terms, stochastic/alternating optimization schemes that only consider a few terms at a time and compute costs on-the-fly could be implemented. We have not tested our approach in this respect yet. #### Question 4 In part (b) of Fig 5, what is the scale of the y axis? The term "Relative" here is very confusing and without explanation. #### Answer 4 The term “relative” means that the errors are calculated relative to the error of the full training by [26] for $n = 1000$; it is therefore calculated as $error_{relative} = \frac{error_{ours} - error_{[26]}}{error_{[26]}}$. We will make this clearer in the final version. #### Unknown Ground Truth and Limitations - To avoid forming an $n \times n$ matrix, the authors propose to only optimize over a handful of entries, which basically requires knowing the ground truth permutation (or assuming some sparse structures as in LAP). This leads to a very limited set of application scenarios. - The limitations of the work are mostly mentioned in the weakness section. The authors indeed addressed them but it is still quite confusing to read for the first time.
#### Answer We will make the limitations of our approach more visible by collecting them in a separate section in the final version of our paper. The approach of (sparse) memory-saving optimization is applicable to any fully supervised approach for learning permutation matrices. Additionally, we anticipate that extensions to self-supervised settings like linear assignment problems are possible by turning to stochastic/alternating optimization schemes that do not consider all terms of the cost function at a time. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions. However, the weakness I mentioned in my review remains. Specifically, the method can only be implemented in supervised learning scenarios, missing out on a wide range of applications. As a result, I will not change my current rating. --- Reply to Comment 1.1.1: Comment: Thank you for your consideration! While we are hopeful that an extension to a fully unsupervised setting is possible, we consider supervised machine learning to be an extremely important topic for the NeurIPS conference audience and would like to point out that a large number of learning techniques published at NeurIPS are fully supervised techniques (only).
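Answer 1 in this thread notes that the fixed $0.5$ threshold is not essential: for any $V$ with distinct normalized rows, thresholding at a level adapted to the measured coherence $\mu$ recovers $P$ from $W = PV$, allowing even lower ranks. A minimal numpy sketch of this adaptive-threshold variant (sizes are illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 40  # rank m far below n: 2*n*m = 40,000 stored numbers vs n^2 = 250,000

# Any V with distinct, unit-norm rows works: the threshold adapts to the
# measured coherence mu instead of being fixed at 0.5.
V = rng.standard_normal((n, m))
V /= np.linalg.norm(V, axis=1, keepdims=True)
G = V @ V.T
mu = (G - np.eye(n)).max()  # largest off-diagonal inner product, here < 1

perm = rng.permutation(n)
P = np.eye(n)[perm]
W = P @ V

# A threshold halfway between mu and 1 separates matches from non-matches.
P_hat = (W @ V.T > (1.0 + mu) / 2.0).astype(float)
assert mu < 1.0 and np.allclose(P_hat, P)
```

As the answer cautions, a threshold this close to one makes gradient-based optimization of $V, W$ much harder, which is why the paper settles on $0.5$.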
Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms
Accept (poster)
Summary: When using re-parameterization (RP) gradient estimators for policy gradient methods (PGM) the optimisation landscape becomes chaotic and non-smooth, with exploding gradient variance. This paper examines RP gradient estimators for policy gradient methods. First, the authors theoretically examine RP PGM. Their theoretical examination highlights that the smoothness of the dynamic model's and the policy's function approximators has a large impact on the quality of the gradient estimator. Following this theoretically guided insight, they propose to enforce smoothness by applying spectral normalisation to all layers of the dynamics and policy's neural networks. Their results suggest that this simple modification mitigates the exploding gradient issue associated with RP PGM. Overall, I think this is a nice paper as it presents a practically significant result backed by theory. Strengths: The paper provides an interesting insight into how the smoothness of function approximators influences the performance of reparameterisation gradients. In particular, the paper builds a practical algorithm on top of theoretical insights and I think this is its greatest strength; as it provides a strong contribution to the community. The paper's contributions are clear and the work is positioned well against the literature. The theoretical claims appear sound but some of the derivations in the appendix are out of the scope of my expertise. Weaknesses: The paper's biggest weakness is its clarity. First of all, the figures are of very low quality and in some cases they are illegible. There are also no figures until the results section on page 8. I think the paper would benefit by moving some of the figures earlier in the paper and making them bigger. I also have concerns regarding reproducibility as no hyperparameters are reported in the paper/appendix. In general, the figures are of very low quality. - Figure 1 - I can't read the title or axis labels. These should be made bigger. 
- The figure is also very crowded. - I cannot see the variance for several algorithms. - Consider running more random seeds to smooth the curves. - Consider having fewer plots in a row so that they can be bigger. Put the rest in the appendix. - Figures 4/6/7 - I can't read the title or axis labels. These should be made bigger. - Do you need both the walker2d and hopper figures in the main paper? The figures would be easier to read if they were bigger, for example, if there were only three plots per row. Perhaps the hopper results could be moved to the appendix? - What does the shading represent? variance/std? How many seeds were used in the experiments to calculate the variance? - In Figure 7 the authors refer to the third/last columns. Consider labelling the columns a-f. Reproducibility - How many seeds were used for each experiment? - What hyperparameters were used? Learning rates, optimiser, activation functions, batch size, width of hidden layers, number of epochs, discount factor, early stopping callback, etc. I cannot see any of these details in the paper or appendix Some citations use arXiv versions instead of conference publications. - 11 is published at ICLR 2020 - 36 is published at ICLR 2018 Minor comments: - Line 127: Gleaned seems an odd word to use here. - Line 149: $\nabla_{a} \hat{f_{\Psi}}$ is repeated twice. Should one of them be $\nabla_{s} \hat{f_{\Psi}}$? - Line 221/222: this sentence doesn't read very well. - Line 295: should $g(x)$ be $g_{i}(x)$? - Section 7 leads straight into 7.1 - Consider adding an overview of Section 7 before Section 7.1 so that the reader knows what to expect. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I will increase my rating if the authors address the issues raised in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
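The smoothness regularization this review summarizes (constraining every layer's spectral norm to 1) can be sketched with a plain power iteration in numpy; this is a hedged stand-in, not the paper's implementation (in practice one would use e.g. PyTorch's spectral_norm utility, which amortizes the power iteration across training steps):

```python
import numpy as np

def spectral_normalize(W, n_iter=200):
    """Rescale W so its largest singular value is 1, via power iteration."""
    u = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = float(u @ W @ v)  # estimate of the largest singular value
    return W / sigma

rng = np.random.default_rng(3)
W = 5.0 * rng.standard_normal((64, 32))  # a weight matrix with a large spectral norm
W_sn = spectral_normalize(W)
top = np.linalg.svd(W_sn, compute_uv=False)[0]
assert abs(top - 1.0) < 1e-3  # spectral norm is now ~1
```

Applying such a rescaling to every layer of the dynamics model and policy bounds each layer's Lipschitz constant, which is the quantity the paper's theory ties to the variance and bias of the reparameterization gradient.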
Rebuttal 1: Rebuttal: We thank the reviewer for identifying our work's soundness and technical contributions. The valuable comments have helped us improve our manuscript. Below are our specific responses to the questions raised by the reviewer: --- **Weakness 1: Clarity of the figures in experiments.** - We sincerely appreciate the valuable suggestions provided by the reviewer. Due to space limitations, we chose to defer the larger-size versions of the experiment figures, namely Figure 1, 4, 6, and 7, to Appendix D.5. - We will make the following changes according to the reviewer's suggestions: We will modify Figure 1 by splitting it into two rows and move the hopper ablation results in Figure 4, 6, 7 to the Appendix. By implementing these changes, each row within a figure will contain only three plots, improving the overall clarity and readability of the figures. --- **Weakness 2: What does the shading represent? How many seeds were used in the experiments to calculate the variance?** The shaded regions in the figures correspond to standard deviation using $6$ different random seeds. We will make this clear in our paper. --- **Weakness 3: In Figure 7 the authors refer to the third/last columns. Consider labelling the columns a-f.** We thank the reviewer for the suggestion and will revise our manuscript accordingly. --- **Weakness 4: What hyperparameters were used?** The hyperparameters used in our experiments are listed in the following table, which we will also add to our manuscript. 
---
| Hyper-parameter | Value |
|---|---|
| Optimizer | Adam |
| Actor learning rate | 1e-4 |
| Critic learning rate | 1e-4 |
| Model learning rate | 1e-3 |
| Reward learning rate | 1e-3 |
| Replay buffer capacity | 1e6 |
| Mini-batch size | 512 |
| Discount factor | 0.99 |
| Target update rate | 5e-3 |
| Activation function | ReLU |
| Actor hidden dim | 512 |
| Actor hidden depth | 4 |
| Critic hidden dim | 512 |
| Critic hidden depth | 4 |
| Model hidden dim | 256 |
| Model hidden depth | 5 |
| Reward hidden dim | 512 |
| Reward hidden depth | 2 |

--- **Weakness 5: Some citations use arXiv versions instead of conference publications.** We will fix this in our manuscript. --- **Minor 1: Line 127: Gleaned seems an odd word to use here.** We will change "gleaned" to "unrolled". --- **Minor 2: Line 149: $\nabla_a \hat{f}_\psi$ is repeated twice.** The reviewer is correct that the latter one should be $\nabla_s \hat{f}_\psi$. We thank the reviewer for pointing this out and will fix it in our manuscript. --- **Minor 3: Line 221/222: this sentence doesn't read very well.** We will revise the text in Line 221/222 to "Proposition 5.2 reveals how the convergence rate changes with the variance and bias of the gradient estimators." --- **Minor 4: Line 295: should $g(x)$ be $g_i(x)$?** Yes. We thank the reviewer for pointing this out and will fix it in our manuscript. --- **Minor 5: Consider adding an overview of Section 7 before Section 7.1 so that the reader knows what to expect.** We appreciate the valuable suggestion from the reviewer.
We will incorporate the following paragraph before Section 7.1 into our manuscript: "In this section, we provide empirical studies to support our theoretical findings. Firstly, in Section 7.1, we present a comprehensive evaluation of multiple algorithms derived from the proposed RP PGM framework. Using the Mujoco control tasks as our experimental domain, we compare these algorithms against various baselines to assess their performance. Secondly, in Section 7.2, we thoroughly examine the optimization challenges inherent in vanilla RP PGMs, which are characterized by issues such as exploding gradient variance and highly non-smooth loss landscapes. Subsequently, in Section 7.3, we demonstrate the effectiveness of employing smoothness regularization techniques to address these issues. Additionally, we conduct ablation studies to reveal the distinct roles played by the gradient variance and bias during training." --- We hope the reviewer could consider raising the score if we resolved the reviewer's concerns. We would be happy to have further discussions if the reviewer has any additional questions or comments. --- Rebuttal Comment 1.1: Comment: Thank you for the clear yet detailed rebuttal. I am happy that you will split the figures to have only 3 plots per row. I am generally happy with your changes and will increase my score. I would still advise the authors to increase the font size of the text in all of the figures. I also want to remind the authors that they can use an extra page for the camera-ready submission (I think), so perhaps some of the figures can stay in the main paper and do not need to be moved to the appendix. --- Reply to Comment 1.1.1: Title: Response Comment: Dear Reviewer iodf, Thank you for taking the time to review our paper! We sincerely appreciate your feedback and are glad to hear that you will raise your score. 
Your suggestions are highly valuable, and we will carefully incorporate them into our revised manuscript, including the font size of the figure text and the figure layout. Your insightful comments have greatly contributed to improving the overall quality of our work. Best regards,\ Authors
Summary: This paper studies differentiable model-based reparametrized policy gradient methods (RP-PGMs) with a particular focus on mitigating policy gradient variance and bias under longer-horizon model rollouts to benefit agent learning and convergence. The paper introduces theorems on the bounds of policy gradient variance and bias in terms of the rollout horizon length and the Lipschitz constant (``smoothness'') of the model and policy networks. Building upon these theoretical foundations, the paper then proposes to use Spectral Normalization on network parameters to minimize policy gradient variance and bias. Experiments on various control tasks demonstrate the effectiveness of the proposed approach. Strengths: - The paper is well-written. The theoretical and empirical findings are organized in a structured and smooth manner that is comfortable to read. - The proposed spectral normalization approach to mitigate policy gradient variance and bias under longer model unroll horizons is both theoretically sound and empirically justified. - The main theoretical results provide a solid theoretical underpinning for the convergence of reparametrized policy gradient methods. Weaknesses: - In this paper, the authors set the overall spectral norm of networks to be 1. It would be interesting to investigate the impact on gradient variance, bias, and policy return when the network's spectral norm is set to a value lower than 1 during training. In particular, is there a tradeoff between policy return and gradient variance & bias? - From Fig. 4, setting a longer model rollout horizon ($h=8/10/15$) seems to harm the sample complexity of policies, though they still converge to similar returns as the shorter horizon cases ($h=3$). It would be interesting to further empirically explore the impact of even longer $h$ on agent sample complexity (e.g., $h=20/50$).
It would also be interesting to explore tasks where longer horizon model rollouts offer performance advantages over shorter horizon model rollouts. Investigating such scenarios would enhance the applicability and generalizability of the proposed approach. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See "weaknesses". Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some further discussions on limitations could be added. For example, future works can explore tasks with higher dimensional observations (e.g., visual input), higher-dimensional action / control outputs (e.g., Humanoid), and alternative networks like CNNs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for identifying our work's soundness and technical contributions. The valuable comments have helped us improve our manuscript. Below are our specific responses to the questions raised by the reviewer: --- **Weakness 1: It would be interesting to investigate the impact on gradient variance, bias, and policy return when the network's spectral norm is set to a value lower than 1 during training. In particular, is there a tradeoff between policy return and gradient variance & bias?** - According to our theory and experiments, adjusting the spectral norm to a value of 1 effectively addresses the issue of exploding gradient variance and the associated optimization problems. - Enforcing the spectral norm to a smaller value has the potential to further reduce the gradient variance at the expense of the networks' approximation capacity. This tradeoff indicates that setting the spectral norm to a smaller value near $1$ has the potential to reach higher policy returns in certain tasks. For instance, as demonstrated in **Figure 3 of the PDF**, we observe that setting the spectral norm to both $1$ and $0.8$ yields comparable results in the half-cheetah locomotion tasks. These settings outperform vanilla RP-DP without spectral normalization, as well as RP-DP with a spectral norm of $0.5$; these two exhibit increased gradient variance and gradient bias, respectively, which contributes to their inferior performance. - Notably, the optimal spectral norm value can be highly task-dependent. For example, when the Lipschitz constant of the system dynamics is less than $1$, models with smaller spectral norms are still able to approximate the dynamics effectively. In contrast, in stiff systems, enforcing a small spectral norm may result in significant model error and gradient bias due to the decreased approximation capacity.
Therefore, in most of our experimental tasks, we recommend using standard spectral normalization as it strikes a good balance between these considerations. --- **Weakness 2: It would be interesting to further empirically explore the impact of even longer $h$ on agent sample complexity (e.g., $h=20/50$). It would also be interesting to explore tasks where longer horizon model rollouts offer performance advantages over shorter horizon model rollouts.** - Based on our experimental observations, we found that utilizing longer expansion steps (e.g., $8$ in the hopper and walker2d tasks) combined with spectral normalization leads to improved performance compared to selecting shorter expansion steps (e.g., $h\leq 5$ as commonly used in previous methods [1, 2]). - However, setting an even larger $h$ value, such as $20/50$, can negatively impact training due to the compounding effect of model errors and the significant bias. This observation is in line with our theoretical results from Proposition 5.7, where we demonstrated that when model error is large, the optimal unroll step should decrease to rely more on the critic. - Addressing the compounding error and bias issue by learning more accurate multi-step models holds great potential as a promising avenue for future research. Once such models are developed, larger values of $h$ can potentially become advantageous, as the model surpasses the critic in accuracy. This assertion finds partial support in the results presented in Appendix D.4, where we demonstrated that learning more accurate models (e.g., by incorporating directional derivative error) leads to further performance improvements. --- **Limitation: Some further discussions on limitations could be added.** We thank the reviewer for the insightful suggestions. 
We will add the following discussion on the limitations of our work in a later version of our manuscript: While the proposed framework and analysis are applicable to general MDP settings, our current experiments do not cover control tasks with high-dimensional inputs, such as visual observations. Additionally, our use of multi-layer perceptron networks as dynamics models restricts us from tackling more complex image-input tasks. Exploring alternative model designs, such as CNN and latent models, would be a fascinating avenue for future research that we intend to pursue. --- We hope the above response resolves your questions and we would be happy to have further discussions if you have any additional questions or comments. --- [1] Clavera et al. ''Model-augmented actor-critic: Backpropagating through paths.''\ [2] Amos et al. ''On the model-based stochastic value gradient for continuous reinforcement learning.'' --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks, authors, for the rebuttal! I'd like to keep my current ratings as they are already high.
Summary: This paper examines reparameterization policy gradient methods in model-based reinforcement learning. It investigates the relationship between the convergence rate, the bias and variance of the reparameterization policy gradient, the smoothness of the model, and the approximation error. Based on the theoretical analysis, it further proposes a spectral normalization method to enforce smoothness on the model and policy. Strengths: 1. The paper is well-written and easy to follow. 2. It is interesting to note how different parts of MB RP PGMs interact with each other, and how the smoothness of the model affects the model expansion steps. 3. The experimental results demonstrate that applying spectral normalization to regularize the model and policy leads to improved performance and enables longer model expansion. Weaknesses: 1. The application of spectral normalization to deep networks is not new. 2. Applying spectral normalization limits both the model and policy capacity, which can result in an increasing gradient bias and a slower convergence rate. The trade-off between gradient variance and bias has not been thoroughly studied in experiments. (e.g. Section 7.3 only studies variance and bias on two disjoint environment suites) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: No Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for identifying our work's soundness and technical contributions. The valuable comments have helped us improve our manuscript. Below are our specific responses to the questions raised by the reviewer: --- **Weakness 1: The application of spectral normalization to deep networks is not new.** - Previous studies have primarily focused on utilizing spectral normalization to tackle training instability problems associated with deep neural networks, such as [1, 2]. - In contrast, our spectral normalization method is rooted in a theoretical analysis of model-based reparameterization policy gradient methods and aims to address the issue of exploding variance that can arise when employing lengthy model unrolls. - Additionally, our investigation in Appendix D.3 reveals that applying spectral normalization to other model-based reinforcement learning algorithms can yield adverse effects. This suggests that the applicability of smoothness regularization is not universal, and we specifically employ spectral normalization based on theoretical motivations, tailored for model-based reparameterization policy gradient methods. --- **Weakness 2: The trade-off between gradient variance and bias has not been thoroughly studied in experiments. (e.g. Section 7.3 only studies variance and bias on two disjoint environment suites).** - In our ablation studies on the gradient bias, we demonstrated a consistent result: even in cases where the bias is minimal (e.g., RP-DP without spectral normalization) or completely absent (e.g., analytic policy gradient approach), the amplified variance can still lead to poor performance outcomes. - The task designs and agent configurations are the same in the Mujoco and dFlex environment. We opted to use the *differentiable* dFlex simulator in order to implement the analytic policy gradient approach. 
This choice also enhances the precision and computational efficiency of our analysis as the bias can be calculated by directly comparing the model-based RP gradient and the analytic gradient. --- We hope the above response resolves your questions and we would be happy to have further discussions if you have any additional questions or comments. --- [1] Miyato et al. ''Spectral normalization for generative adversarial networks.''\ [2] Bjorck et al. ''Towards deeper deep reinforcement learning.'' --- Rebuttal Comment 1.1: Comment: Dear Reviewer yXDQ, As we are approaching the midpoint of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Should there be any lingering points that require further attention, please rest assured that we are enthusiastic about the opportunity to provide comprehensive responses to any subsequent queries or comments you may have. Your constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration. Best, Authors
Summary: The paper theoretically analyzes the reparameterization policy gradient estimator’s bias and variance in reinforcement learning optimization and provides results characterizing the optimization convergence using such gradient estimators. It then proposes to apply spectral normalization (dividing the linear weight matrix by its largest singular value to make the Lipschitz constant upper bounded by $1$) on the learned world model and learned policy network. Empirically, they observe that spectral normalization reduces the variance of the RP gradient estimator and improves the performance of RP to be comparable/better than other RL methods (including likelihood ratio methods). Strengths: 1. The paper does a good job setting up the background and context about the policy gradient estimation methods in reinforcement learning. 2. The experimental results confirm the benefits of applying spectral normalization in reparameterization policy gradient methods when the model expansion steps are large. Weaknesses: 1. Despite having an extensive background discussion on policy gradient, the paper is very brief on describing the formula and implementation specifics of model derivatives on predictions (DP) and model derivatives on Real Samples (DR): Equations (4.1) and (4.2) are plain tautologies (the RHS only expands the reward function $J$) and provide no information about the backward recursive structure of the gradient estimators. 2. Assumption 5.3 on Lipschitz Continuity. In the paper, the authors assume a Lipschitz constant on the learned world model $f_\psi$. However, this neglects the fact that the learned world model is also changing as the learning progresses. Without spectral normalization of the world model, it is conceivable that the world model might have a growing Lipschitz constant over the update steps $T$. It’s not clear to me that this assumption has already taken this update-time aspect into account, and it deserves further explanation in the paper. 3.
The paper conducts experiments on Mujoco tasks. To my understanding, Mujoco tasks only have randomness in the initialization state, but don’t have randomness in the state transition. As a result, the assumptions on a stochastically transitioning environment made in the paper don’t seem to hold. It would be necessary for the authors to clarify whether this is the case or why they haven’t experimented with environments with greater randomness (or make Mujoco random). 4. Convergence theory is non-informative. - In Proposition 5.2, to have the learned model converge to a stationary point, we want the LHS (minimum gradient’s squared 2-norm encountered so far) to be decreasing as a function of $T$. However, in the upper bound on the right hand side, the first term contains $\mathbf{E}[J(\pi_{\theta_T}) - J(\pi_{\theta_1})]$ which should increase as $T$ increases if the optimization is making progress in maximizing the value function. The relationship between this term and its denominator $T$ should be further discussed. Besides, the second term on the right hand side has a term $O(\frac{\sum_{t=0}^{T-1} v_t}{T})$. In Proposition 5.4, the authors provide an $O(1)$ bound for $v_t$ for a fixed number of gradient estimates $N$. Thus the term $O(\frac{\sum_{t=0}^{T-1} v_t}{T})$ would only be $O(1)$. Hence it’s not clear to me that this theory can capture the empirical observation that training using reparameterization gradients can converge to approximate local (or even global) maxima (which are stationary points). - In Proposition 5.7, the authors give a big-O bound on the optimal model expansion step $h^*$ for update iteration $t$. This bound depends on the gradient errors $\epsilon_{v, t}$ and $\epsilon_{f, t}$, both of which are unknown values that depend on the ground truth value function and world model. As a result, it’s not clear whether this theory can offer any practical guidance.
Besides, this optimal expansion step is update-iteration dependent (depends on $t$) and it’s not clear how to practically instantiate such an h-schedule. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. **Correlated randomness over time** Line 129 and 138 are reusing the same random variable $\zeta$ and $\xi$ for all time steps’ action sampling and environment transition in the same rollout. However, the randomness in action sampling and the randomness in state transition shouldn’t be correlated over different time steps. Can the authors clarify why these variables are shared for a given rollout trajectory? 2. **Are there other ways to trade off bias for variance?** The authors propose to use spectral normalization to reduce the variance at the cost of bias. Another way to potentially trade off bias for variance is to use something similar to truncated back propagation through time. In this case, one could imagine using a very small time horizon and completely ignore the discounted rewards after a certain time step (currently captured by the learned critic function). How would such a (high bias, low variance) remedy perform in comparison to the SN methods on the RL tasks considered in this paper? 3. **Would there be sufficient incentive to use longer expansion step + spectral normalization?** Looking at the experiment figures 4, 5, and 6, it seems that the performance of RP gradient only starts to deteriorate for longer unroll length (h > 5). When h ≤ 5, it seems that using SN doesn’t really improve the performance. In this case, why wouldn’t researchers just choose to use a short model expansion step (together with a learned critic) and not to use the spectral normalization approach proposed by the authors? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper doesn’t have much discussion on the limitations. I would encourage the authors discuss what they think are the limitations in their theoretical and experimental results. I don’t think negative societal impacts are relevant to this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
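For readers unfamiliar with the operation this review summarizes, spectral normalization (dividing a weight matrix by its largest singular value so the layer's Lipschitz constant is at most 1) can be sketched in a few lines. The following is an illustrative stand-alone numpy version using power iteration; the matrix shape, seed, and iteration count are arbitrary choices for the sketch, not the authors' implementation.

```python
import numpy as np

def spectral_normalize(W, n_iters=100):
    """Rescale W so its largest singular value is at most 1.

    The top singular value is estimated by power iteration on W^T W,
    the standard trick used by spectral-normalization layers.
    """
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v = v / (np.linalg.norm(v) + 1e-12)
        u = W @ v
        u = u / (np.linalg.norm(u) + 1e-12)
    sigma = u @ W @ v          # estimated largest singular value
    return W / max(sigma, 1.0)  # only shrink; leave already-smooth W alone

W = 3.0 * np.random.default_rng(1).normal(size=(64, 32))
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False).max())  # ~1.0
```

In practice (e.g. in deep-learning frameworks) the singular-vector estimates are cached across training steps so a single power-iteration step per update suffices.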
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. Below are our specific responses to the questions raised by the reviewer: **Weakness 1: Eqs. 4.1 and 4.2 are plain tautologies and provide no information about the backward recursive structure of the gradient estimators.** - Eq. 4.1 and 4.2 (or A.9) depict the two variants of RP gradients where the sole distinction lies in the incorporated noise variables $\zeta$ and $\xi$, which are sampled from predefined distributions and inferred from real samples, respectively. - These equations first compute the value and then obtain its first-order gradient w.r.t. $\theta$, whose recursive expression is given in Appendix A. In practice, the gradient is computed by automatic differentiation using modern deep learning libraries. **Weakness 2: The model might have a large Lipschitz constant over the update steps.** - The Lipschitz constant of the *learned* model is a global coefficient that can be enforced by constraints or model designs like SN. - In fact, what we aim to address is the case when the model has a large Lipschitz constant that can result in significant variance in gradients. In such cases, it becomes crucial to employ smoothness regularization techniques to effectively tackle the optimization challenges that arise. **Weakness 3: Randomness in the state transition.** - Our results hold for MDPs with both stochastic and deterministic transitions. For the latter case, the noise $\xi^*$ would be $0$ for the dynamics $s'=f(s, a, \xi^*)$. However, the issue of exploding variance still exists due to the randomness in the initial state and the stochastic policy. - In **Figure 1 in the PDF**, we report the results in Mujoco tasks with stochastic state transitions. **Weakness 4.1: The relationship between $\mathbb{E}[J(\pi_{\theta_T}) - J(\pi_{\theta_1})]$ and $T$ in Proposition 5.2.
Besides, the RHS has a variance term that would be $O(1)$.** - For MDPs with bounded reward, e.g., if $|r(s, a)| \leq r_m$ as stated in Proposition 5.2, then $\mathbb{E}[J(\pi_{\theta_T}) - J(\pi_{\theta_1})]\leq 2r_m$, which is a universal upper bound. This is because $|V^\pi(s_0)|=(1-\gamma)|\mathbb{E}_\pi[\sum_i\gamma^i r(s_i, a_i)]|\leq r_m$. - Proposition 5.4 shows that $v_t$ scales as $O(1/N)$. It suffices to choose a reasonably large value for $N$ for convergence, as evidenced by Corollary 5.9. Even so, we emphasize that Proposition 5.2 primarily serves to justify our SN method by characterizing the roles of gradient bias and variance in convergence. To serve this purpose, we explicitly express the bias and variance terms while imposing minimal assumptions. As a result, the upper bound in Proposition 5.2 may appear looser compared to the typical bounds in model-free RL analysis, which involve fewer error sources and stronger assumptions (e.g., the bounded-variance Assumption 4.4 in [1]). **Weakness 4.2: Optimal model expansion step in practice.** Proposition 5.7 aims to shed light on the factors influencing $h^*$ and its dependence on the horizon and the model and critic errors. The result indicates that $h^*$ should increase when the model is more accurate and decrease when the critic is more accurate. However, this is only rough guidance in practice since accurately quantifying these errors poses a significant challenge, making it difficult to determine an optimal $h^*$ schedule during training. **Question 1: Correlated randomness over time.** At each timestep $i$, the random variables $\zeta$ and $\xi$ are independently sampled and are not shared within a rollout trajectory. We will revise the notations in Line 129 to $\zeta_i, \xi_i$, and in Line 138 to $\zeta_{i,n}, \xi_{i,n}$. We sincerely thank the reviewer for bringing this to our attention.
**Question 2: Are there other ways to trade off bias for variance?** - The reviewer is correct that one way to reduce gradient variance is to use Truncated BPTT. However, this approach can lead to a huge gradient bias, as it over-prioritizes short-term dependencies. This has been observed in some previous works [2]. - In order to further investigate this, we performed additional experiments and report the performance results in **Figure 2 of the PDF**. These experiments demonstrate that Truncated BPTT struggles to achieve high returns. **Question 3: Would there be sufficient incentive to use a longer expansion step + SN?** - Our experimental results demonstrated that longer expansion steps (e.g., $h=8$ in hopper and walker2d) with SN offer better performance than short expansion steps (e.g., $h\leq 5$ as commonly used in previous methods). - The observed decrease in performance with larger $h$ is in line with our theoretical findings from Proposition 5.7 as the compounding model error becomes more pronounced. Therefore, it is advisable to decrease $h$ and rely more on the critic. Notably, SN is proposed to mitigate these optimization challenges; once it is applied, the above tradeoff between model and critic error holds. - Learning more accurate models to address the bias issue holds significant promise for future research. Once we develop such models, larger $h$ can become advantageous, as the model surpasses the critic in accuracy. **Limitation: The paper doesn’t have much discussion on the limitations.** While SN addresses the challenge of exploding gradient variance and enables longer model unrolls, learning more accurate longer-horizon models to fully exploit the gradient information remains an open problem. Besides, our experiments focus solely on SN as a smoothness regularization technique, and we acknowledge the need for further exploration of alternative designs.
We hope the above response resolves your questions and would be happy to have further discussions if you have any additional comments. [1] Wang et al. ''NPG methods: Global optimality and rates of convergence.''\ [2] Xu et al. ''Accelerated policy learning with parallel differentiable simulation.'' --- Rebuttal Comment 1.1: Title: Follow-up on the rebuttal. Comment: Dear Reviewer epDC, As we are approaching the midpoint of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Should there be any lingering points that require further attention, please rest assured that we are enthusiastic about the opportunity to provide comprehensive responses to any subsequent queries or comments you may have. Your constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration. Best, Authors
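The exploding-variance argument running through this exchange can be reproduced on a toy example. Everything below is an illustrative 1-D linear rollout with a made-up dynamics constant `c` and linear policy (not the paper's setup): when the composed dynamics-policy map has Lipschitz factor above 1, the variance of the reparameterization gradient blows up with the unroll horizon `h`; when the factor is below 1, it stays controlled.

```python
import numpy as np

def rp_grad(theta, s0, c, h):
    """Analytic reparameterization gradient of J = s_h w.r.t. theta for the
    toy rollout s_{t+1} = c * (s_t + a_t) with policy a_t = theta * s_t,
    which gives s_h = (c * (1 + theta))**h * s0."""
    return h * c * (c * (1 + theta)) ** (h - 1) * s0

rng = np.random.default_rng(0)
s0 = rng.normal(size=100_000)   # randomness enters only through the initial state
for c in (1.2, 0.5):            # composed Lipschitz factor c*(1+theta) above vs. below 1
    variances = [float(np.var(rp_grad(0.1, s0, c, h))) for h in (2, 5, 10, 15)]
    print(f"c={c}: {variances}")
```

With `c=1.2` the gradient variance grows by several orders of magnitude from `h=2` to `h=15`, while with `c=0.5` it shrinks; this is the qualitative behavior the smoothness (spectral-norm) argument is about, reduced to a closed-form scalar case.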
Rebuttal 1: Rebuttal: We conduct additional experiments to address Weakness 3 and Question 2 raised by **Reviewer epDC**, and Weakness 1 raised by **Reviewer eACF**. The results can be found in the attached PDF file. Pdf: /pdf/4cba79755888bc2253b1ece1389ad1d63673623b.pdf
NeurIPS_2023_submissions_huggingface
2023
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
Accept (poster)
Summary: This paper presents a comprehensive study on the induced sparse patterns across multiple large pre-trained vision and language transformers. The authors propose the existence of "essential sparsity" and present intriguing findings on abrupt sparsification during the pre-training of transformers and the effect of pre-training data on knowledge condensation. Strengths: The main strength of this paper is that it validates the ubiquitous existence of essential sparsity across large pre-trained transformer models of varying scales for vision and language, irrespective of the training strategy used for pre-training them. The authors used various datasets and models to validate the existence of "essential sparsity" and presented intriguing findings on abrupt sparsification during the pre-training of transformers and the effect of pre-training data on knowledge condensation. The authors also compared the performance of models with and without sparsification and found that one-shot sparsification without re-training does not significantly affect the performance on downstream tasks. In Section 6, the authors further carefully analyzed the connection between LTH and essential sparsity, and showed the former to become potentially less necessary or relevant in larger models. Those are valuable insights of broad interest to the sparsity research community. Based on the methodology and results presented in the paper, the experiments presented are sound enough to support the idea of the existence of essential sparsity in large pre-trained models (bert-base, OPT-125m, OPT-350m, OPT-1.3B). In particular, the authors present several quite surprising findings related to the existence of essential sparsity in large pre-trained transformer models. - Firstly, the authors found that BERT suddenly becomes heavily sparse after a certain number of training iterations, which was not observed before and is not well understood.
This finding suggests that there may be underlying “phase transition”-like mechanisms in the pre-training dynamics of transformers that are responsible for inducing sparsity, and further research is needed to understand it. - Secondly, the authors also found that BERT trained with a larger amount of pre-training data tends to have a better ability to condense knowledge in relatively fewer parameters. This finding is counter-intuitive because one would expect that increasing the amount of pre-training data would lead to an increase in the number of parameters required to capture the additional information. - Thirdly, the authors found that self-supervised learning (SSL) objectives trigger stronger emergent sparsification properties than supervised learning (SL). This finding is also intriguing because one would expect that supervised learning, which provides more explicit information to the model, would lead to better knowledge condensation. Weaknesses: - One main downside of this paper is that the paper did not report any result on hardware-friendly sparsity such as N:M, nor it discussed any GPU run time benefit from the induced essential sparsity (if any). While one can understand the study is mainly conceptual, just like the original LTH, I believe the real hardware support is especially important for LLM pruning/sparsity research due to their exploding costs - More baselines are desired – currently only LTH is reported. Would essential sparsity meaningfully outperform the simplest baseline of random pruning? How it compares with the pruning result of SparseGPT [18]? - In both SparseGPT [18] and Sparsity-May-Cry [75], it was found that larger LLMs are harder to prune. This paper seems to pinpoint the opposite conclusion. Could the authors elaborate why their conclusions seem to contradict [18,75], or not? - Sparsity has more benefits beyond efficiency, such as few-shot transfer, robustness to noisy label or other distribution shifts. 
Many of those were previously demonstrated under LTH or dynamic sparse training settings. I would be curious to see if essential sparsity enjoys the same merits too. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see the Weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Not discussing more hardware-friendly sparse patterns Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
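The one-shot magnitude pruning underlying the "essential sparsity" probe discussed in this review is simple to state: rank all weights by magnitude, zero out the smallest fraction, and do no re-training. Below is a minimal numpy sketch; the layer shapes and the global (layer-agnostic) thresholding are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def one_shot_magnitude_prune(weights, sparsity):
    """Zero out the globally smallest-magnitude `sparsity` fraction of weights,
    with no re-training (one-shot magnitude pruning, 'OMP')."""
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    threshold = np.quantile(flat, sparsity)  # global magnitude cutoff
    return [np.where(np.abs(w) > threshold, w, 0.0) for w in weights]

rng = np.random.default_rng(0)
layers = [rng.normal(size=(128, 64)), rng.normal(size=(64, 10))]
pruned = one_shot_magnitude_prune(layers, sparsity=0.5)
kept = sum(int((w != 0).sum()) for w in pruned)
print(kept / sum(w.size for w in layers))  # close to 1 - sparsity = 0.5
```

"Essential sparsity" is then the largest `sparsity` for which this training-free, data-free operation leaves downstream performance essentially unchanged.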
Rebuttal 1: Rebuttal: Many thanks for identifying the significance of our work and finding it promising and practical considering the massive scale of recent large-scale models. We additionally appreciate that you found our experiments insightful, with surprising findings and counter-intuitive observations that can open up several research topics (phase transition, etc.). We address the weaknesses you pointed out point-by-point below: **1. No results for hardware-friendly sparsity such as N:M:** Thank you for bringing up this point and we are glad to update you that we have additionally **explored fine-grained N:M structured sparsity** (https://arxiv.org/abs/2102.04010) (including the widely accepted 2:4 sparsity pattern with real hardware acceleration), which requires that, in each contiguous block of M values, only N values be non-zero. In our favor, we found that **essential sparsity still holds for N:M sparsity**, which can also be identified in a *training-free and data-free manner at no extra cost*, bringing actual acceleration for large transformers. We have included **our results on N:M sparsity in Figure 2 of the rebuttal pdf**. **2. Large models are harder to prune?** We would like to clarify the confusion: Sparsity May Cry [75] states that *on hard tasks (arithmetic reasoning, protein stability, etc.), even large models cannot be effectively pruned to high sparsity*. It doesn’t mean that the larger the model becomes, the harder it is to prune. Instead, they claim that the more challenging the task, the more difficult it becomes to prune the model. Our observations align with [75] without any contradiction, where we state that the essential sparsity range is dependent on the task complexity (lines 193-197 of the submitted pdf).
Similarly for SparseGPT [18], they mention "*In general, there is a clear trend of larger models being easier to sparsify, which we speculate is due to overparametrization (paragraph 3, Section 4.1)*", which has **no contradiction with our claims**. Moreover, another piece of evidence from Figure 2 of SparseGPT: "*One key positive finding, illustrated in Figure 2, is that larger models are more compressible: they drop significantly less accuracy at a fixed sparsity, relative to their smaller counterparts*", which also does not contradict our observation. **3. More baselines are required, like random pruning?** Thank you very much for raising this point, and we completely agree that a comparison with random pruning is important. We have **uploaded new results of random pruning and random ERK pruning in Figure 1 of the rebuttal pdf**. Note that random pruning performs significantly worse than our one-shot magnitude pruning approach. We promise to include these results in the final version. Additionally, to your interest, **Figure 3(a) in our rebuttal draft** illustrates that our essential sparsity observations hold **true even for modern LLMs (Vicuna-7B)**, sending a favorable signal about the hidden existence of high-quality sparse subnetworks which can be identified for free in dense pre-trained checkpoints. To further enrich our study, we replaced OMP with the recently proposed SparseGPT and found it to have generally consistent trends with OMP (**Figure 3(b) in our rebuttal draft**). In addition, it is interesting to observe that better-designed pruning strategies such as SparseGPT can further push the boundary of essential sparsity and identify better sparse subnetworks at comparatively higher sparsity ratios, yet at higher compute costs. We leave closing the performance gap between OMP and SparseGPT as future work. **4. Sparsity beyond Efficiency?** Thank you for bringing it up, and we agree that sparsity has benefits beyond efficiency (e.g., robustness, few-shot, etc.). 
Unfortunately, due to the limited time for rebuttal, we leave this experiment for the future (camera-ready version) as it is not directly related to the primary scope of this work. We sincerely hope our responses have clarified many of your concerns, and please do not hesitate to let us know what else we could do in order to convince you of a rating upgrade. --- Rebuttal Comment 1.1: Comment: Thank you so much for the reply! I have read the other reviews too. It appears that the authors have addressed all concerns properly. I think the paper has presented more-than-sufficient insight and back-up results to warrant its acceptance. The N:M sparsity and Vicuna-7B results are even nicer additions. In particular, beyond LLMs, this paper reports ViT results too, as well as pre-training dynamics from scratch (which were not observed in peer LLM pruning works). I also don't feel it necessary, nor reasonable, to ask for some colossal model (LLaMA 65B, Bloom 175B) to be done within the rebuttal time window. I am raising my score to 8 to champion this solid work. --- Reply to Comment 1.1.1: Title: Author Response to dYgU Comment: We are extremely glad that you find our work solid. We deeply appreciate and thank you for your strong support for the work and for identifying its merits. We're particularly grateful for your agreement on the sufficiency of our current and newly added experiments, and on the impracticality of running 'colossal models' within a few days.
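The N:M constraint discussed in the thread above (within every contiguous block of M weights, only the N largest-magnitude values stay non-zero, e.g. 2:4) can be sketched in plain Python. This is an illustrative sketch, not the authors' code or NVIDIA's implementation; `prune_n_m` is a hypothetical helper name:

```python
def prune_n_m(weights, n=2, m=4):
    """One-shot magnitude-based N:M pruning: in each contiguous block of m
    weights, keep only the n largest-magnitude values and zero the rest."""
    pruned = []
    for i in range(0, len(weights), m):
        block = weights[i:i + m]
        # indices of the n largest-|w| entries within this block
        keep = sorted(range(len(block)), key=lambda j: -abs(block[j]))[:n]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(block))
    return pruned

# 2:4 pattern: two survivors per block of four
print(prune_n_m([0.9, -0.1, 0.05, -1.2, 0.3, 0.2, -0.4, 0.01]))
```

Because the constraint is local to each block of M values, the pattern maps onto sparse tensor-core hardware, which is why the rebuttal stresses "real hardware acceleration"; fully unstructured magnitude pruning offers no such guarantee.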
Summary: This research paper focuses on the following notion for large pre-trained models: "essential sparsity", the sparsity level beyond which fine-tuning performance after one-shot pruning drops sharply. The authors propose that large and overparameterized models can be pruned without additional computational expense, showing this to be true across a range of tasks in both computer vision and natural language processing. Additionally, the authors found an interesting occurrence of abrupt sparsification during pre-training, indicating that models trained with larger datasets tend to achieve knowledge abstraction with fewer parameters. Their findings also revealed that self-supervised learning objectives tend to trigger stronger emergent sparsification properties than supervised learning. They argue that identifying and understanding these inherent high-quality sparse patterns could make fine-tuning large models more practical and environmentally friendly. Strengths: The paper makes a number of interesting observations about the notion of essential sparsity, its relation to "winning ticket" networks (see LTH), the idea that better knowledge abstraction may be feasible with fewer parameters (as long as we train with more data), and that self-supervised learning tends to have better emergent sparsification than supervised learning. Weaknesses: Unfortunately, these interesting observations mentioned in the Strengths section are also too "empirical" or anecdotally shown in the paper. It is hard to be convinced that these are general results -- and they are not substantiated with any form of theory, or at least some explanations/justifications based on results from other papers. In general, I would describe this paper as "moving in a good direction but not ready for publication yet" -- at least not at a top-tier deep learning conference. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: There is a completely different approach to get the benefits of essential sparsity -- namely, to start with and train a sparse network. That would be even less computationally intensive than training a dense network and then doing OMP on it. Are the authors familiar with methods such as SynFlow, PHEW, SynFlow++, etc., that do pruning before training? If not, I suggest that they also consider those and compare the performance they get from OMP versus those networks. The two more interesting observations of the paper (the idea that better knowledge abstraction may be feasible with fewer parameters, as long as we train with more data, and that self-supervised learning tends to have better emergent sparsification than supervised learning) are presented very briefly, without a sufficiently deep analysis in my opinion. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: One limitation, of course, is that the claims of the paper are not substantiated in the case of the really large foundation models used today -- but of course it is hard for academic researchers to experiment with the training of such models. Another issue is that the paper does not explicitly state the limitations of the proposed method. For example, at the end of Section 6 there are some rather "hidden statements" about the benefits of LTH and IMP, which is a very important point. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time to review our work. We address all the weaknesses you pointed out point-by-point below: **1. Observations are too empirical and it is hard to be convinced that these are general results:** We would like to highlight that sparse neural networks are a new frontier for deep learning theory, and limited theoretical works are available to augment experimental observations and algorithm designs. * Although we are heavily empirical, we would like to bring your attention to recent theoretical works in deep learning theory [https://arxiv.org/abs/2112.11027, https://arxiv.org/abs/1909.05122, https://arxiv.org/abs/2002.09277, https://arxiv.org/abs/1903.09367] which use sparsity modeling to understand the implicit regularization impact and over-parameterization of DNNs with growing scale, supporting our empirical observations. We promise to cite them in our final version to provide theoretical support for our observations. * About the supervised vs. self-supervised findings, our results are again consistent with prior work [https://arxiv.org/abs/2012.06908] on small-scale models, illustrating an important signal that self-supervised learning is more sparsity-friendly. We conjecture that sparsity is one of the key structural priors for unsupervised learning [https://openreview.net/pdf?id=TJ2nxciYCk-, https://arxiv.org/pdf/2207.04630.pdf]; and while we do not have a theoretical explanation yet (leaving it for future work), we believe that self-supervised learning inherently induces better sparse patterns during training. * To further show that our finding generalizes even to modern-day large-scale models, we have attached **additional experiments with Vicuna-7B in the rebuttal pdf, Figure 3**. 
The need to find a high-quality sparse subnetwork in a training-free and data-free manner is significantly important considering the exploding size of LLMs, where the conventional iterative prune-retrain strategy becomes impractical. * We have also attached **new results exploring fine-grained N:M structured sparsity** (https://arxiv.org/abs/2102.04010) (including the widely accepted 2:4 sparsity pattern with real hardware acceleration), which requires that within each contiguous block of M values, only N values may be non-zero. In our favor, we found that essential sparsity still holds for N:M sparsity, which can also be identified in a **training-free and data-free manner at FREE COST**, bringing actual acceleration for large transformers. Please check **Figure 2 of the rebuttal pdf**. **2. Less computationally intensive way: start with a sparse network and perform training?** We apologize for the confusion, and we would like to highlight that **our work focuses on identifying the sparse patterns in pre-trained models**. Note that we perform OMP on a pre-trained checkpoint, and this is *a training-free and data-free approach* (the cheapest possible compared with SynFlow or SynFlow++; yes, we are very familiar with SynFlow) to sparsify the pre-trained checkpoint. Note that, first, we are not doing pre-training but simply pruning pre-trained models in resource-constrained environments to identify the sparse subnetwork for free. Secondly, during fine-tuning of the identified sparse subnetwork, we only do sparse training (as you suggested). We want to clarify that, like OMP, SynFlow can be smoothly integrated with our settings. To your interest, *we applied SynFlow and found that it barely outperforms our extremely cheap approach*, as shown in **Figure 1 of our rebuttal pdf**. The **KEY POINT** of this work is not to demonstrate sparsity, BUT to demonstrate *how easily this good sparsity can be achieved in large models at no cost* due to emergent behaviors. 
We are not competing with LTH or any other fancy pruning work, but we are interested in whether it is necessary to use LTH or other expensive methods within the essential sparsity range. **3. Are the claims of the paper substantiated for the really large foundation models used today?** Thank you for bringing up this point, and we provide **additional experiments with Vicuna-7B on the popular MMLU benchmark in the rebuttal pdf, Figure 3**. Based on our results in Figure 3, it is interesting to observe that our essential sparsity observations hold true even for modern LLMs, sending a favorable signal about the hidden existence of high-quality sparse subnetworks which can be identified for free in dense pre-trained checkpoints. We also provide new results which extend our observations to fine-grained N:M structured sparsity with real hardware potential. **4. Limitations are not explicitly outlined?** We really appreciate your concerns, and we promise to add a separate limitations section to the paper explicitly mentioning the limitations of our work (such as the benefits of IMP in the high-sparsity regime, theoretical evidence, scaling up to 10B+ model parameters, etc.). We sincerely hope our responses have clarified many of your concerns, and please do not hesitate to let us know what else we could do in order to convince you of a rating upgrade. --- Rebuttal Comment 1.1: Comment: Thank you for generating some new results to address one of my comments. I appreciate that and I will increase my score from Borderline to Weak Accept (also considering the comments posted by the other reviewers).
Summary: This paper defines essential sparsity and conducts various experiments to analyze the sparsity properties of pre-trained models. Strengths: 1. This paper investigates the potential of directly sparsifying pre-trained models. 2. Both CV and NLP models are explored. Weaknesses: 1. Evidence on large models is lacking. 2. Some conclusions are not new, e.g., the sharp drop point. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What about the conclusions on large models like LLaMA 65B or Bloom 175B? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The insights brought by this paper are limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time to review our work. We appreciate your feedback and would like to address the concerns you raised regarding our work's weaknesses. However, we believe that some key contributions may not have been fully acknowledged in your assessment. We kindly suggest taking into consideration the viewpoints of the other reviewers as well, as they might provide a more comprehensive perspective on our work. 1. To address your concerns related to evidence on large-scale models, we have included **new experiments on Vicuna-7B in the rebuttal pdf, Figure 3** (note that we have OPT-1.3B results in the submitted draft), which align with our observations in the paper. Secondly, we would like to clarify that although the abrupt drop phenomenon has been observed previously, our key point is to study this observation in a multi-dimensional way wrt. training strategy, pruning strategy, model scale, data modality, and dataset size. Beyond just extending to pre-trained models, our multi-dimensional study reveals many interesting findings, such as that sparsity ratios below the abrupt-drop marker are agnostic to pruning strategies (iterative vs. OMP) and that you do not require any fancy and expensive method to identify high-quality sparse subnetworks. At a glance, it might look simple, but we believe this has huge practical implications considering the impracticality of performing the iterative prune-and-retrain strategy with modern-scale LLMs. Finally, we kindly hope you understand that the request for conclusions on LLaMA 65B and Bloom 175B cannot be met without industry-scale hardware. 2. **Conclusions are not new?** We respectfully disagree. As nicely summarized by other reviewers: **(a) Reviewer XMXq:** *“...this paper opens up several research topics to be studied. 
I believe they will be impactful…”* **(b) Reviewer Azn4:** *“...comparison of self-supervised vs fully supervised models is interesting…”* **(c) Reviewer QH66:** *“...paper makes a number of interesting observations about the notion of essential sparsity…”* **(d) Reviewer dYgU:** *“...the authors present several quite surprising findings related to the existence of essential sparsity…”*. Almost all the other reviewers found that our empirical observations have wide practical importance for finding and utilizing the pre-existing sparsity of transformers. Additionally, we believe our observations related to abrupt sparsification during the pre-training process, supervised vs. self-supervised learning, etc., also open an exploration playground for researchers to theoretically understand and design efficient pre-training strategies. We sincerely hope our responses have clarified many of your concerns, and please do not hesitate to let us know what else we could do in order to convince you of a rating upgrade. --- Rebuttal Comment 1.1: Title: Author response to Vvp4 Comment: Dear Reviewer Vvp4, We thank you for your time to review our work and your constructive comments to improve it, and we really hope to have a further discussion with you to see if our response solves your concerns. We have replied to the important points raised by you, such as novelty concerns, limited experiments for large-scale models, etc., in our rebuttal response. Since the author-reviewer discussion period started a few days ago, we would appreciate it if you could check our response to your review comments soon. This way, if you have further questions and comments, we can still reply before the author-reviewer discussion period ends. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. We again thank you for your time and efforts. 
Best, Authors --- Reply to Comment 1.1.1: Title: 2nd Reminder on feedback Comment: Dear Reviewer Vvp4: This is our 2nd reminder for you to please read our rebuttal and hopefully update your opinion. As the deadline for the discussion period is approaching, we would appreciate it if you could kindly let us know whether any further questions remain. We're very confident that our rebuttal responses will address your concerns. We would again like to highlight that we have addressed your concerns related to large models and additionally provided more experimental results related to structured N:M sparsity patterns, which significantly increase the value of our work. Authors --- Rebuttal Comment 1.2: Comment: 1. We thank the authors for adding the Vicuna-7B experiments. However, we think it cannot be considered a very large pre-trained model. 2. As for the sharp drop point, we can refer to papers such as https://arxiv.org/pdf/2301.00774.pdf. In Figure 1, it reveals an obvious drop from certain sparsity rates. --- Reply to Comment 1.2.1: Title: Response to Comment by Reviewer Vvp4 Comment: We would like to thank you for your time to read our rebuttal and respond. **1. Vicuna-7B is not considered a very large pre-trained model:** We value your concern, BUT we would again like to emphasize that 65B or 175B experiments are seriously impractical without big industry-scale hardware support. Considering your response came very close to the deadline of the discussion period (~1 day before it ends), we would not be able to secure the required hardware support and complete the experiments within the deadline. However, we promise to scale our findings to 65B in the final version of our paper. In addition, we are running experiments for Vicuna-13B, which are expected to complete within the next 12 hours, and we will update our results as soon as we get them. We sincerely hope you can understand the hardware constraints; several concurrent works (e.g., 
LLM pruner https://arxiv.org/abs/2305.11627) also restrict their findings to 7-10B scale models. **2. Sharp-drop point behavior:** Thank you for raising this concern, and we certainly agree that *almost all pruning methods will observe a performance drop after a certain sparsity level - this is neither a surprise nor our main finding*. Unfortunately, it seems **you have missed the key message of our work**. * The primary goal of this paper is to show that the easiest pruning option, *one-shot, magnitude-based, training-free, and data-free pruning* (different from SparseGPT, which also requires calibration data and a more costly Hessian estimation), echoes exactly the same behavior as any sophisticated method like LTH within a sparsity range. **The most important finding, overlooked by prior work, is summarized** as: *within the sparsity range induced by "essential sparsity", the simplest possible pruning technique, as aforementioned, performs as well as any fancy technique like LTH, and even their identified sparse masks are extremely similar*. **This has NEVER been revealed by any other work, and it seems the other reviewers have appreciated this main merit.** * Orthogonal to SparseGPT or any other pruning method: for the first time, we reveal how **"easy"** large-model pruning is. At least within a certain sparsity range, one need not look beyond the *simplest one-shot, magnitude-based, training-free, and data-free pruning* - that both presents a strong baseline for future pruning and reveals a strong "in-situ" pruning option (e.g., pruning is as simple as magnitude sorting). Hence our finding comes with profound practical value too, e.g., for cheap "on-the-fly" LLM pruning at test time, adaptive to varying resource availability. 
* We provide some surprising and counter-intuitive findings related to the **emerging abrupt sparsification of BERT during pre-training, sparsity dynamics in supervised vs. self-supervised settings, and first-time controlled pre-training data experiments** which illustrate that more data makes models sparser, etc. We also included ViT experiments. All the other reviewers have appreciated the significance of our interesting findings and the solidness of our thorough experiments. We reiterate that we will include the Vicuna-13B experiments within the next 12 hours, and sincerely hope that you will take a look at them before making your final decision. We hope for the best and remain open to clarifying any remaining doubts to convince you of a rating upgrade before the clock ticks out.
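The one-shot magnitude pruning (OMP) baseline and the sparsity measure 1 - ||m||_0 / |m| that the rebuttals above lean on can be sketched as follows; this is an illustrative sketch under assumed helper names (`omp_mask`, `measured_sparsity`), not the paper's actual code:

```python
def omp_mask(weights, sparsity):
    """One-shot magnitude pruning: return a 0/1 mask keeping the
    largest-magnitude (1 - sparsity) fraction of weights; no
    retraining or calibration data involved."""
    n_keep = round(len(weights) * (1.0 - sparsity))
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    mask = [0] * len(weights)
    for i in order[:n_keep]:
        mask[i] = 1
    return mask

def measured_sparsity(mask):
    """Fraction of pruned weights: 1 - ||m||_0 / |m|, where ||m||_0 is
    the number of kept (non-zero) entries and |m| the total count."""
    return 1.0 - sum(mask) / len(mask)

w = [0.5, -0.02, 1.3, 0.0, -0.7, 0.09, 0.4, -0.11]
m = omp_mask(w, sparsity=0.5)
print(m, measured_sparsity(m))
```

Note that the procedure only sorts the magnitudes of an existing checkpoint, which is what the rebuttals mean by a "training-free and data-free" identification of the sparse subnetwork.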
Summary: The paper postulates the existence of "Essential Sparsity" in large pre-trained transformer models. "Essential Sparsity" is defined by the paper as a sparsity threshold beyond which further pruning of weights leads to a large performance drop. It considers (1) one-shot pruning and (2) lottery ticket pruning of networks with fine-tuning. Experiments are conducted on both large language and vision transformer models. Strengths: While sparsity has been studied a lot in the neural network literature, this work studies sparsity for pre-trained language and vision models. Given the wide adoption of these networks, the study is important. The paper covers a number of experiments for large vision as well as language models. The comparison of self-supervised vs. fully supervised models is interesting. Weaknesses: 1. The existence of sparsity in transformers in both language [a,b] and vision [c] has been discussed in the prior literature. The drop in performance of networks beyond a certain threshold has also been covered in the literature. I do believe the current work is the first one to formally define it; however, it would be good if the work could put the contributions in proper context. 2. Many of the "surprising" findings of the work appear to be over-claimed. For instance: i. The abrupt drop in the performance of networks beyond certain sparsity thresholds is shown by the original LTH paper, and the current work extends it to pre-trained networks. ii. The similarity of sparse masks for various downstream tasks is also not very surprising given that the network was pre-trained on a massive dataset and the downstream tasks comprise very small, similar sets of data (often even subsets of the pre-training data). iii. L50 (and L66) says that "a significant proportion of the weights in them can be removed for free" while it is hardly free given the network has been heavily pre-trained. 3. 
The mathematical definition of Essential Sparsity is somewhat unclear (what exactly does $1-\frac{\|m\|_0}{|m|}$ represent?). Also, calling it essential for the network is a bit misleading. I agree dropping it would hurt the performance of the network, but it is not "essential" for the network to perform well. A fully dense network usually performs the best. 4. Minor fix - labels for the x-axis are missing in most of the figures. References: a. Chen, Tianlong, et al. "The lottery ticket hypothesis for pre-trained BERT networks." NeurIPS 2020. b. Dettmers, Tim, et al. "LLM.int8(): 8-bit matrix multiplication for transformers at scale." NeurIPS 2022. c. Girish, Sharath, et al. "The lottery ticket hypothesis for object recognition." CVPR 2021. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: While some of the analyses in the paper are interesting, I would appreciate the authors' comments on the above weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper doesn't discuss the limitations of the study in detail. For example, while the sparsity and efficiency of models both during training and inference is an important problem, it should be kept in mind that unstructured sparsity alone is not good enough to obtain speed gains. There is no discussion of potential speed gains with unstructured sparsity (this doesn't necessarily mean FLOPs, but speed improvement during inference/training), or the training time of different networks, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time to review our work, and we are glad that you find our work important and some of our observations very interesting. We address all the weaknesses you pointed out point-by-point below: **1. The existence of sparsity in transformers in both language and vision has been discussed in the prior literature?** Thank you for outlining your concern. We would like to emphasize that although several works discuss the existence of sparsity in transformers: * Our work primarily focuses on bringing attention to the pre-existing sparse patterns in a **TRAINING-FREE** and **TASK-FREE** setting, which is critically important considering the exploding scale of transformers. As pointed out by reviewers XMXq and dYgU, our observations provide a promising and practically efficient approach to finding and utilizing the pre-existing sparsity of transformers, and potentially open up discussion around many research topics (e.g., phase transition, etc.). * In addition, we are the first to scale up our observations of sparsity to **billion-parameter transformers** (additional results in the rebuttal pdf **include Vicuna-7B results - check Figure 3**) and illustrate that you do not require any iterative pruning and retraining within the essential sparsity range. We for the first time provide empirical evidence that the sparse masks obtained by expensive iterative prune-and-retrain procedures and by simple OMP without any retraining are significantly similar, indicating that LTH is not doing anything magical in the essential sparsity regime and that OMP can identify matching subnetworks. * Lastly, our observations related to abrupt sparsification during the pre-training process, supervised vs. self-supervised learning, etc., also open an exploration playground for researchers to theoretically understand and design efficient pre-training strategies. 
**NOTE:** We also include new results on Vicuna-7B to illustrate the existence of essential sparsity within it, indicating a sound message that there potentially **exists an easy way to sparsify modern-day LLMs, without access to any training data and with no retraining, in a one-shot fashion** before observing an abrupt performance drop. The **KEY POINT** of this work is not to demonstrate sparsity, **BUT** to demonstrate how easily this good sparsity can be achieved in large models at no cost. We are not competing with LTH or any other pruning work, but we are interested in whether it is necessary to use LTH or other expensive methods. **2. Over-claimed observations:** * **Abrupt drop is observed by the original LTH paper:** We would like to clarify that although an abrupt drop in the performance of networks beyond certain sparsity thresholds is shown by the original LTH paper, our key point is to study this observation in a multi-dimensional way wrt. training strategy, pruning strategy, data modality, and model scale. Beyond just extending to pre-trained models, our multi-dimensional study reveals many interesting findings, such as that sparsity ratios below the abrupt-drop marker are agnostic to pruning strategies (iterative vs. OMP) and that you do not require any fancy and expensive method to identify high-quality sparse subnetworks. At a glance, it might look simple, but this has huge practical implications considering the impracticality of performing the iterative prune-and-retrain strategy with modern-scale LLMs. 
* **The similarity of sparse masks for various downstream tasks is not interesting:** We would like to highlight that, unlike work developing downstream task-dependent sparse masks (https://arxiv.org/pdf/2303.14409.pdf, https://arxiv.org/abs/2012.06908, etc.), our work again brings focus to the key question: *Is it always required to have a task-dependent mask, which only works for a single task, considering the computational cost involved in identifying it?* Our observations find that within the essential sparsity range, it is not required to search for a task-dependent mask, and a cheap one-shot mask is as good as an expensive task-dependent mask identified by the IMP of LTH. * **Our argument of "free" is hardly free given the network has been heavily pre-trained?** We apologize for the confusion; we think the term "for free" has been understood in the wrong context. When we say "for free", we mean that, given a pre-trained model, with our OMP settings you do not need an additional computational budget to iteratively prune and retrain the model weights to identify sparse subnetworks (impractical for modern LLMs without industry-standard hardware), thereby incurring no pruning overhead. **3. Limitation [unstructured sparsity is not good enough]:** Thank you for bringing up this point, and we are glad to update you that we have **additionally explored fine-grained N:M structured sparsity** (https://arxiv.org/abs/2102.04010) (including the widely accepted 2:4 sparsity pattern with real hardware acceleration), which requires that within each contiguous block of M values, only N values may be non-zero. In our favor, we found that **essential sparsity still holds for N:M sparsity**, which can also be identified in a training-free and data-free manner at FREE COST, bringing actual acceleration for large transformers. We have included our results on N:M sparsity in **Figure 2 of the rebuttal pdf**. **4. Essential for the network is misleading?** We would like to clarify the confusion. 
When we say essential sparsity, it does not mean that it is essential for the network. Rather, it means that the network is robust to sparsification and its performance does not significantly drop under pruning within the essential sparsity range. We follow previous works (https://arxiv.org/abs/1901.09181, https://arxiv.org/abs/1902.09574, https://arxiv.org/abs/2101.09048) and define the model sparsity as $1-\frac{\|m\|_0}{|m|}$, where $\|m\|_0$ is the number of non-zero weights and $|m|$ refers to the total number of weights. We hope our rebuttal clarifies all your doubts, and please let us know what else we could do to convince you of a rating upgrade. --- Rebuttal Comment 1.1: Title: Author Response to Reviewer AZn4 Comment: Dear Reviewer AZn4, We thank you for your time to review our work and your constructive comments to improve it, and we really hope to have a further discussion with you to see if our response solves your concerns. We have replied to the important points raised by you, such as novelty concerns, limited gains from unstructured sparsity, over-claimed observations, clarification regarding the definition of essential sparsity, etc., in our rebuttal response. To augment our rebuttal response, we would like to add further clarification related to the novelty concern regarding the abrupt drop in our work. We certainly agree that *almost all pruning methods will observe a performance drop after a certain sparsity level - this is neither a surprise nor our main finding*. We would like to highlight the key message as follows: * The primary goal of this paper is to show that the easiest pruning option, *one-shot, magnitude-based, training-free, and data-free pruning*, echoes exactly the same behavior as any sophisticated method like LTH within a sparsity range. 
**The most important finding, overlooked by prior work, is summarized** as: *within the sparsity range induced by "essential sparsity", the simplest possible pruning technique, as described above, performs as well as any fancy technique like LTH, and even their identified sparse masks are extremely similar*. **This has never been revealed by any other work, and it seems the other reviewers have appreciated this main merit.** * For the first time, we reveal how **"easy"** large-model pruning is. At least within a certain sparsity range, one need not look beyond the *simplest one-shot, magnitude-based, training-free, and data-free pruning*, which both presents a strong baseline for future pruning work and reveals a strong "in-situ" pruning option (i.e., pruning is as simple as magnitude sorting). Hence our finding comes with profound practical value too, e.g., for cheap "on-the-fly" LLM pruning at test time, adaptive to varying resource availability. * We provide some surprising and counter-intuitive findings related to the **emerging abrupt sparsification of BERT during pre-training, sparsity dynamics in supervised vs. self-supervised settings, and first-time controlled pre-training data experiments**, which illustrate that more data makes models sparser, etc. We also included ViT experiments. All other reviewers have appreciated the significance of our interesting findings and the solidity of our thorough experiments. Since the author-reviewer discussion period has been open for a few days, we would appreciate it if you could check our response to your review comments soon. This way, if you have further questions and comments, we can still reply before the author-reviewer discussion period ends. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. We again thank you for your time and efforts.
Best, Authors --- Reply to Comment 1.1.1: Title: 2nd Reminder on feedback Comment: Dear Reviewer AZn4: This is our second reminder asking you to please read our rebuttal and, hopefully, update your opinion. As the deadline for the discussion period is approaching, we would appreciate it if you could kindly let us know whether any further questions remain. We are confident that our rebuttal responses address your concerns. We would again like to highlight that we have addressed your concerns related to novelty, the limited gain from unstructured sparsity, over-claimed observations, and the clarification of the definition of essential sparsity, and have additionally provided more experimental results on structured N:M sparsity patterns, which significantly increase the value of our work. Authors
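The one-shot, magnitude-based pruning (OMP) and the sparsity definition $1-\frac{\|m\|_0}{|m|}$ discussed in the exchange above can be illustrated with a minimal sketch (NumPy only; the function names and target-sparsity interface are our own illustration, not code from the paper):

```python
import numpy as np

def sparsity(weights: np.ndarray) -> float:
    # 1 - ||m||_0 / |m|: fraction of weights that are exactly zero
    return 1.0 - np.count_nonzero(weights) / weights.size

def one_shot_magnitude_prune(weights: np.ndarray, target_sparsity: float) -> np.ndarray:
    # Zero out the smallest-magnitude weights in a single pass:
    # no data, no retraining, no iterative prune-retrain cycles.
    k = int(target_sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value acts as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.random.randn(1000, 1000)
pruned = one_shot_magnitude_prune(w, 0.5)  # sparsity(pruned) is ~0.5
```

The point of the sketch is that the whole procedure is a sort plus a threshold, which is why the rebuttal calls it training-free and data-free.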
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for taking the time to review our work and for offering important suggestions. In this pdf, we attach some key experiments requested by the reviewers, which further strengthen the impact of our work. We summarize our results as follows: * **[Requested by Reviewers QH66 and dYgU] Figure 1** provides additional results comparing the performance of SynFlow and the important baselines of random pruning and random pruning with ERK, and illustrates that SynFlow does not bring any additional benefit despite being more expensive than OMP. * **[Requested by Reviewers XMXq, AZn4, dYgU, BUT for all] Figure 2** provides additional results exploring fine-grained N:M structured sparsity (https://arxiv.org/abs/2102.04010) (including the widely accepted 2:4 sparsity pattern with real hardware acceleration) and shows that our essential sparsity observations hold true for N:M sparse patterns. * **[Requested by Reviewers Vvp4, QH66, dYgU, BUT for all] Figure 3** provides additional experiments with *Vicuna-7B on the popular MMLU benchmark*. Based on our results in Figure 3, it is interesting to observe that our essential sparsity observations hold true even for modern LLMs, sending a favorable signal about the hidden existence of high-quality sparse subnetworks that can be identified for free in dense pre-trained checkpoints. Lastly, we again thank all the reviewers and hope our additional results and rebuttal responses clarify their doubts. We are more than happy to provide any further explanations required. Best, Authors 9125 Pdf: /pdf/3333e2712e621a4d688ce7d13fa0ed093b8bf701.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper delves into the natural sparsity of large-scale pretrained transformers that can be found without additionally training the models with sparsity optimization targets. The authors find that when low-magnitude weights are simply zeroed out, 1) up to a certain sparsity level, the pruned transformers do not exhibit a performance drop on downstream tasks after fine-tuning; 2) the sparsity of weights abruptly increases at certain training iterations; 3) simple magnitude-based pruning shows similar performance to the Lottery Ticket Hypothesis method up to a certain sparsity level. The phenomenon is observed across many different LLM and vision transformer architectures. Also interestingly, self-supervised weights are more sparse than supervised weights. Strengths: This paper presents interesting findings on the nature of pretrained weights of large-scale models based on a substantial amount of experimental evidence. The authors emphasize that using essential sparsity to prune weights can be done without the repetitive train-prune-retrain routine. Considering the massive scale of recent large-scale transformers, it is a promising and practically efficient approach to find and utilize the sparsity of transformers. In addition, this paper opens up several research topics to be studied. I believe they will be impactful. - Sparsity pattern from self-supervised learning and supervised learning - Sparsity pattern change by training on larger-scale datasets - Acceleration of large-scale transformers by pruning weights Weaknesses: - Practical gains after finding the sparsity pattern In contrast to learning-based pruning methods (such as LTH), essential sparsity has the raw pattern of weight magnitude. Although it is interesting to find that the sparsity is there by nature, as mentioned in L90-91, such patterns can’t be directly used to bring actual acceleration of neural networks.
Training-based methods can set an optimization goal on sparsity patterns to alter them and make the neural network accelerable; however, such an approach seems difficult for essential sparsity. While it is easier to find the sparsity pattern with essential sparsity, what would be the practical gains of the proposed method after finding the sparsity? - Why does sparsity occur? The authors present an interesting comparison of the impact of sparsity on model accuracy under various conditions: model size & architecture, supervised vs. self-supervised learning. It is interesting to know of such results, but why do such differences happen? How can we understand the phenomenon? - typos L36 pre0trained -> pre-trained Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No negative societal impact is expected. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for recognizing the significance of our work and finding it promising and practical considering the massive scale of recent large-scale models. We additionally appreciate that you found our experiments quite solid and that our observations can open up several research topics. To address the weaknesses you pointed out, we respond point-by-point below: **1. Limited practical acceleration due to unstructured sparse patterns?** Thank you for bringing up this point, and we are glad to update you that we have additionally explored fine-grained N:M structured sparsity (https://arxiv.org/abs/2102.04010) (including the widely accepted 2:4 sparsity pattern with real hardware acceleration), which requires that, in each contiguous block of M values, only N values are non-zero. * To our pleasant surprise, we found that essential sparsity **still holds for N:M sparsity**, which can also be identified in a training-free and data-free manner, free of cost, bringing actual acceleration for large transformers. We have included our results on N:M sparsity in Figure 2 of the rebuttal pdf. * We would like to note that although optimization goals on sparsity patterns can be set up during training, it is impractical to adopt this for current gigantic transformers without industry-scale hardware. * On the other hand, the pivotal contribution of essential sparsity is to bring attention to free-of-cost, pre-existing sparse patterns without any training requirements. **2. Why does sparsity occur?** Thank you for raising this question. Prior literature has consistently observed that higher sparsity comes as a natural consequence of model size. [https://openreview.net/pdf?id=TJ2nxciYCk-] found that large transformers often have unactivated sparse neurons (in activation space), and our results can be viewed as a counterpart in weight space.
Similarly, [https://cbmm.mit.edu/sites/default/files/publications/Theoretical_Framework__How_Deep_Nets_May_Work_15.pdf] argues that certain deep architectures, such as CNNs and transformers, work very well because they significantly exploit the general property of compositional sparsity. In addition, in deep learning theory [https://arxiv.org/abs/2112.11027, https://arxiv.org/abs/1909.05122, https://arxiv.org/abs/2002.09277, https://arxiv.org/abs/1903.09367] there are many works using sparsity modeling to understand the implicit regularization and over-parameterization of DNNs with growing scale. Regarding your point about supervised vs. self-supervised learning, our results are consistent with prior work [https://arxiv.org/abs/2012.06908] on small-scale models, illustrating an important signal that self-supervised learning is more sparsity-friendly. We conjecture that sparsity is one of the key structural priors for unsupervised learning [https://openreview.net/pdf?id=TJ2nxciYCk-, https://arxiv.org/pdf/2207.04630.pdf]; and while we do not have a theoretical explanation yet (leaving it for future work), we believe that self-supervised learning may inherently induce better sparse patterns during training. Finally, thank you for pointing out some typos; we promise to correct them all in the camera-ready version, along with a more detailed related-work section discussing the above prior works. We hope our responses have clarified many of your concerns, and please do not hesitate to let us know what else we could do to convince you of a rating upgrade. --- Rebuttal Comment 1.1: Comment: After reading the rebuttal, I still think essential sparsity is far from being practical. In rebuttal Figure 2, the authors present performance drops for several different N:M sparsity patterns. However, I don't think any of the N:M patterns other than 2:4 are valid.
Although general N:M sparsity is theoretically possible, it requires hardware support (such as tensor cores) to achieve actual acceleration. 2:4 sparsity has been supported by GPU hardware for a few years, but no other patterns have been supported. Some patterns where N or M are powers of 2 have been theoretically explored; however, the patterns in rebuttal Figure 2 (1:10, 1:6, 2:10) are not practical. (Is it 1:10 or 9:10?) * A. Zhou et al., Learning N:M Fine-grained Structured Sparse Neural Networks from Scratch, ICLR 2021 As 2:4 sparsity loses accuracy, I don't consider it to be safely accelerable without additional training. Given that the authors could provide more objective comments in the revision, I am ok with it. I still find it an interesting paper. I'm between Accept and Weak Accept.
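To make the N:M pattern debated above concrete, here is a minimal sketch of magnitude-based N:M pruning, keeping the N largest-magnitude entries in every contiguous block of M (a training-free illustration of ours, not code from the paper or from Zhou et al.):

```python
import numpy as np

def n_m_prune(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    # In every contiguous block of m weights, zero out all but the
    # n largest-magnitude entries (2:4 is the hardware-supported case).
    flat = weights.ravel().copy()
    assert flat.size % m == 0, "weight count must be divisible by m"
    blocks = flat.reshape(-1, m)
    # per block, indices of the (m - n) smallest-magnitude entries
    drop = np.argsort(np.abs(blocks), axis=1)[:, : m - n]
    np.put_along_axis(blocks, drop, 0.0, axis=1)
    return blocks.reshape(weights.shape)

w = np.random.randn(64, 64)
pruned = n_m_prune(w, n=2, m=4)  # exactly half the weights are zeroed
```

Note that, as the reviewer points out, such a mask only translates into real speedups for patterns with hardware support (e.g. 2:4 on recent GPUs); the sketch only shows how the mask itself can be found without data or training.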
Sample-Efficient and Safe Deep Reinforcement Learning via Reset Deep Ensemble Agents
Accept (poster)
Summary: Overfitting in deep Q-learning agents is a recent topic of interest in the RL community, and several methods have been proposed to mitigate this problem, including data augmentation (DrQ), random ensembles (RedQ, DroQ), and resets (Nikishin et al.). This paper builds upon prior work on periodically resetting weights and addresses one of its core limitations: by resetting weights, an agent will perform poorly immediately after resetting, although it eventually recovers and often exceeds its performance before resetting. This paper proposes to learn an *ensemble* of agents and periodically reset only one of the agents at a time (in sequence). Experiments are conducted on tasks from DMControl, Minigrid, and Atari100k, and indicate that resetting with ensembles is effective at mitigating deterioration of performance immediately after a reset. Strengths: This paper is well written, considers an interesting problem, and experiments appear sound. The description of the proposed method is easy to follow, and especially Figure 4 is useful for understanding how the ensemble resetting works in practice. Weaknesses: - It would be useful to include more discussion on related works that seek to understand and mitigate overfitting in deep Q-learning besides resetting. There is a lot of literature in this area and I imagine that the authors are familiar with the literature, so I will refrain from mentioning any specific references (besides the methods mentioned in my summary) to remain impartial. - I would like to see more ablations to get a deeper understanding of the trade-offs in ensemble resetting vs. prior works. How does sample-efficiency and performance drop change when the number of ensemble agents $N$ varies? How often should one reset agents? Is the rate of resets and number of agents dependent on the replay ratio? What is the computational cost of using additional agents vs. increasing the replay ratio? 
Would the proposed method benefit from using the ensemble in other ways as well, e.g. by using the RedQ trick for computing TD-targets? Addressing some or all of these questions would likely increase the impact of the work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would like the authors to address my comments listed in "weaknesses". If the authors are not able to conduct some or all of the experiments required to answer my questions during the rebuttal, I'd like to see a discussion of what the authors would expect the results to be based on their experience. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is sufficient discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Regarding "Related works"** - We will include related works regarding overfitting problem in RL in the final manuscript, as the reviewer recommended. **Regarding "Sample efficiency and performance collapse with respect to the number of ensemble agents"** - We have conducted additional experiments concerning the number of ensemble agents, as detailed in the common response. As depicted in Figure 3 of the rebuttal PDF file, the results demonstrate that increasing the number of ensemble agents, denoted as $N$, improves both final performance and sample efficiency. This improvement is attributed to each agent being trained with experiences from a diverse set of ensemble agents, contributing to increased diversity. Moreover, the result shows that a higher $N$ leads to more efficient mitigation of performance collapse. **Regarding "Reset interval"** - We have included additional experiments concerning the reset interval, as outlined in the common response. It is seen that highly frequent resets can negatively impact learning. Consequently, determining an appropriate reset interval, a critical hyperparameter, is imperative. Importantly, our proposed method consistently outperforms the vanilla reset method across all considered reset intervals. **Regarding "Computational costs of increasing the number of ensemble agents and the replay ratio"** - Computational costs increase linearly as we raise the number of ensemble agents or the replay ratio. A limitation of our approach is the escalated computational cost due to ensemble agents. Nonetheless, it is important to highlight that, typically, the challenge in reinforcement learning lies more in sample efficiency, owing to the substantial costs tied to environment interaction, rather than computational intricacies. We are convinced that our method substantially enhances sample efficiency and safety, especially in environments with ample computational resources. 
We will include this discussion in the final manuscript. **Regarding "relationship between (the rate of resets, the number of agents) and the replay ratio"** - As discussed in our work, the reset method allows us to increase the replay ratio, resulting in improved performance. Therefore, a higher replay ratio allows for a higher rate of resets (more frequent resets). The number of ensemble agents is independent of the replay ratio, but we need to choose an appropriate number of ensemble agents and replay ratio within our computational budget, as both increase computational costs. **Regarding "Benefit from other ensemble learning methods"** - Ensemble learning offers a range of advantages, including improved exploration [4-1] and reduced estimation variance [4-2], depending on its purpose and approach. In our work, we leverage ensemble learning to prevent performance collapse in reset methods and attain diversity gains. Additionally, we concur that ensemble learning could yield further benefits, such as reduced variance during Q-function estimation, as pointed out by the reviewer. While we recognize that the recently reinitialized agent may affect these ensemble learning benefits, this aspect could be explored in future work with slight modifications to our method. [4-1] Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning [4-2] Averaged-DQN: Variance reduction and stabilization for deep reinforcement learning --- Rebuttal 2: Title: An Invitation for Further Discussion Comment: As the discussion stage is drawing to a close, we are eagerly awaiting your comments and suggestions. We believe that our responses, along with the additional experiments conducted during the rebuttal period, have effectively answered your questions, thereby enhancing the clarity of our work. We are grateful for the valuable suggestions and questions provided by the reviewer. Thank you for your valuable time and feedback.
Summary: This paper combines the resetting method proposed by (Nikishin et al., 2022) as a remedy to the primacy bias affecting deep RL algorithms with the use of ensembles of agents. The proposed RDE method, apart from generally improving performance, has the goal of minimizing the regret associated with a learning agent that uses periodic resets, mitigating the severity of the performance drops it experiences right after a reset. To do this, without sacrificing too much on exploration and online data collection, the probability of executing an action in the environment is evaluated according to the oldest value function. Empirical results show benefits of this approach, at both low and high replay ratios, in standard robotic locomotion and navigation tasks, as well as a safety domain. Strengths: **Originality**: although the combination of resets and ensembles is not particularly original, the focus on developing a technique for leveraging their combination to avoid performance collapse while resetting is, to the best of my knowledge, new. **Quality**: the quality of the work is generally good. The experiments cover a reasonable number of domains and the ablations mostly answer natural questions. **Clarity**: the clarity of the writing is good. The paper would benefit from some small tuning to the presentation here and there, but the overall flow makes clear what the contribution is. **Significance**: harnessing the performance benefits originating from the mitigation of the primacy bias while at the same time not incurring a cost in terms of regret is a worthy research direction which could be interesting to many practitioners and researchers. Weaknesses: **Major Concerns** - If I understand it correctly from Algorithm 1, a different policy could potentially be selected at each step in the environment.
This could conceptually create problems in terms of inconsistent behavior, since each policy will have to deal with the actions previously sampled from a potentially very different policy, and in terms of a lack of "deep exploration", since this would harm temporally-consistent behaviors. A reader would benefit from a discussion or analysis of this aspect, to understand whether this is happening at all, or, if not, why it might not be happening in this kind of task or setting. - The idea of combining periodic resets and ensembles of agents in continuous control has been explored in "Unleashing The Potential of Data Sharing in Ensemble Deep Reinforcement Learning" (Lin et al, 2022). I still find the paper's idea of using this combination to avoid performance drops valuable, but discussing the relationship with that paper could better contextualize the contribution. **Minor Concerns** - Y and X labels are missing from all figures in the paper, making it hard to parse the plots at first sight. For most plots, it is either quite easy to infer the quantities of interest, or they are explicitly mentioned in the caption, but it can still be very misleading for many readers. - The paper would benefit from the extension of some of the ablations to more tasks. In particular, I find the performance of the ensemble-based approach without any resetting on top to be an important baseline to contextualize the results in Figure 3, and I think results concerning it would be a good addition to the figure. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I ask the authors to provide answers to my concerns expressed above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss any computational consideration resulting from their use of ensembles of agents. I encourage the authors to add such a discussion to the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Regarding "Inconsistent behavior"** - It is entirely true that a different policy can be chosen at each time step. While this might result in inconsistent behavior, such inconsistency doesn't negatively affect exploitation, since our method primarily relies on off-policy learning (the reset policy is trained using experiences generated by the previous RL agent). Moreover, in terms of exploration, diverse policy behaviors can improve the process by generating new trajectories that have not been encountered before. Specifically, we typically introduce Gaussian noise to actions in continuous action domains to encourage exploration, which can also constrain the exploration of a single RL agent. Through the utilization of policies from other ensemble agents, we expect that the adaptively composited agent can visit unexplored state-action spaces. We plan to include this discussion in the final manuscript. **Regarding "Ensemble agents without reset"** - We have already compared our method to the ensemble-based approach without reset, and you can see the corresponding result in Figure 5 (purple line) of the main paper. This demonstrates that the combination of our ensemble learning and the reset method enhances performance. **Regarding "Relationship with a prior work"** As the reviewer mentioned, [3-1] also combines ensemble learning and the reset method. However, [3-1] primarily focuses on exploiting the diversity gain by sharing data among ensemble agents, while our approach concentrates on both enhancing the diversity gain and preventing performance collapse. The specific differences are as follows: - [3-1] employs parallel learning, training $N$ ensemble agents across $N$ corresponding environments. In contrast, RDE adaptively combines ensemble agents into a single agent within a single environment. - [3-1] resets ensemble agents simultaneously, whereas RDE performs a sequential reset to prevent performance collapse.
- Additionally, we provide results in various RL domains, encompassing both discrete and continuous action spaces, as well as safe RL benchmarks. **Regarding "Figures"** - We will make more concrete revisions to the figures based on the comments from the reviewer. [3-1] Lin et al., "Unleashing The Potential of Data Sharing in Ensemble Deep Reinforcement Learning," arxiv, 2022 --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I am satisfied with the response from the authors. Provided that they will do the modifications to their paper (concerning figures, related work and other tweaks proposed in responses to other reviewers), my opinion is that the paper should be accepted. I am raising my score. --- Reply to Comment 1.1.1: Title: Thanks for the re-evaluation Comment: We are thankful for the increased score. We will improve our paper based on the reviewers' comments.
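The sequential, staggered reset schedule contrasted with [3-1] above can be sketched as follows (a minimal toy of ours, with agents reduced to plain dictionaries; the actual RDE implementation resets full policy/value networks):

```python
class ResetEnsemble:
    """N agents; every reset_interval // n steps, exactly one agent
    (in round-robin order) is reinitialized, so at most one member
    is 'young' at any time and the collapse after a reset is softened."""

    def __init__(self, n: int, reset_interval: int, init_fn):
        self.init_fn = init_fn                 # returns fresh agent parameters
        self.agents = [init_fn() for _ in range(n)]
        self.period = reset_interval // n      # stagger: T_reset / N
        self.next_to_reset = 0

    def step(self, t: int):
        # Each agent is still reset once per reset_interval overall,
        # but the resets are spread out instead of simultaneous.
        if t > 0 and t % self.period == 0:
            self.agents[self.next_to_reset] = self.init_fn()
            self.next_to_reset = (self.next_to_reset + 1) % len(self.agents)

# Toy run: track each member's "age" (steps since its last reset).
ens = ResetEnsemble(n=2, reset_interval=10, init_fn=lambda: {"age": 0})
for t in range(1, 21):
    for a in ens.agents:
        a["age"] += 1
    ens.step(t)
# After 20 steps the two members' ages differ by T_reset / N = 5,
# so a mature member is always available to act.
```

This also makes the reviewer's Algorithm-1 concern concrete: with this schedule, each individual member is reset every $T_{reset}$ steps, while some member is reset every $T_{reset}/N$ steps.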
Summary: The work proposes an extension to the resetting strategy proposed by Nikishin et al. The extension intends to mitigate the catastrophic performance collapse often observed for the simple resetting strategy while keeping the properties that help avoid the primacy bias. To this end, the work makes use of ensembling techniques, such that the policy/value network of an RL agent is not completely reset, but can fall back to different checkpoints. The method is evaluated on a variety of environments, and a further extension is presented that enables application in safety-critical systems. Strengths: * The presented method cleverly combines ensembling with resetting to improve RL agents. * It seems general enough that it should be usable as a plug-and-play method for a broad variety of RL agents. * The work is mostly easy to follow * It is demonstrated that the method is flexible and can incorporate auxiliary information, such as safety-critical information, to provide a safer method than the vanilla resetting one Weaknesses: The presentation could be improved: * A lot of the discussed "preliminaries" seem irrelevant for the content of the paper. For example, the paragraph on "Off-policy RL." seems not necessary, and the content in "Primacy Bias and Resetting Deep RL Agent." is largely a repetition of the introductory text. * Algorithm 1 is never discussed in the text and feels wholly redundant with Fig. 1. This half page might be used to show more experimental results. * Algorithm 1 is not consistent with the text. From the Algorithm it looks like every ensemble member is reset every $N\times T_{reset}$ time-steps and not $T_{reset}/N$ as stated in line 149. * Lines 178 - 180 are concerned with expressing that the "oldest Q-function" is used to normalize in the selection mechanism. This could be expressed in much clearer terms, simply writing "oldest Q-function" as is done in line 183.
* In line 195 it is claimed that RDE effectively prevents performance collapses; however, I disagree with this wording. It can mitigate them to some extent, but the experiments clearly show that performance collapses still happen. * The description of Figure 2 is wholly confusing. The long sentence explaining what the y-axis is showing is expressed in a very convoluted way. * In Figure 2c it should not be possible that the performance of the baseline "Base" is below 1 when using an RR of 1. * Design decisions are often not well enough explained. * Figure 1 does not explain what RR is * The paragraph heading in line 204 should be "Baselines & RL Agents" not "Baselines & DNN Architectures" * Line 246 claims that there is no significant drop on the humanoid-run example. The reward for the RDE method drops from 150 to 100. This represents a significant drop in my opinion, since 1/3 of the performance is lost. * The choice of design for the selection mechanism is very unclear. The discussion about using the oldest Q-function in the selection mechanism seems rather to point to only using the oldest Q-function. Indeed, this seems to be supported by the experiments and should at least have been an ablation. Additionally, the initial paragraph of Section 3.2 also seems to point to only using the oldest Q-function * It is claimed that a $\beta$ set to 50 "nearly eliminated" performance collapses (line 270). However, the performance collapse in the figure is again the 1/3 loss in performance. * Section 5 feels like an added afterthought. I don't see why it warrants its own section. It repeats some of the discussion of safe RL from the preliminaries section. Instead, the "safe" selection mechanism should have been discussed in Section 3, and then only the results should be a subsection of Section 4. * Where does the value for the reset frequency $4\times 10^5$ in line 221 come from?
The experiments seem to have an unfair comparison and are likely not reproducible: * All details about how hyperparameters were determined seem to be missing * The value of $N$ is never stated. Only from the plots can it be assumed to be 2 * The ablation does not take into account all confounding factors and should be redone * It is claimed that using a reset frequency for the individual members equal to that of the single-network case is fair. I fail to see how this is fair. This seems to just give a benefit to the RDE method, since it can benefit $N$ times as much from resetting. * Without stating how the hyperparameters of the methods were set, it seems like an arbitrary comparison of the methods. * N is not really ablated, and it's not clear how the ensemble increases computational overhead. The description of an MDP is wrong. An MDP is an abstract representation of an environment. The MDP does not consist of an agent. The bounds for the discount factor in line 74 should be $\gamma \in [0,1]$ not $\gamma \in [0,1)$. It is totally valid to have undiscounted cases. Overall, the work would need significant rewriting and an overhaul of the experiments for me to accept it. I am very doubtful that this can be done in the time-frame of a rebuttal. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: How would PBT-style methods compare to resetting strategies? Are they doing some form of resetting? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations of the method have not been discussed. An obvious limitation is that ensembles will likely increase the computational overhead. For example, in line 131 it is said that the simple resetting strategy often requires a high replay ratio and therefore more resources.
This should likely be worse for the presented method and the conducted experiments are not convincing enough to show that the novel method would require fewer resources. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
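For context on the Figure 2 discussion above, the metric at issue is the interquartile mean (IQM) of test returns, normalized by the base algorithm at replay ratio 1. A minimal sketch of that computation, using made-up scores rather than the paper's data:

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: mean of the middle 50% of sorted scores."""
    s = np.sort(np.asarray(scores, dtype=float))
    n = len(s)
    lo, hi = n // 4, n - n // 4  # drop the bottom and top quartiles
    return s[lo:hi].mean()

def normalized_iqm(method_scores, base_scores):
    """Normalize a method's IQM by the IQM of the base algorithm (RR = 1)."""
    return iqm(method_scores) / iqm(base_scores)

# Illustrative test returns over random seeds (hypothetical numbers).
base = [100, 110, 90, 105, 95, 102, 98, 103]
rde  = [120, 130, 115, 125, 118, 122, 119, 127]
print(round(normalized_iqm(rde, base), 3))  # → 1.206
```

Under this convention the base curve sits at exactly 1 by construction, which is why a "Base" value below 1 at RR = 1 (Figure 2c) only makes sense if the plot shows unnormalized returns, as the rebuttal later confirms.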
Rebuttal 1: Rebuttal: **Regarding "Presentation"** - The subsection on 'Off-policy RL' is necessary in our work for two reasons. Firstly, the reset methods depend on an off-policy algorithm, as a recently reinitialized RL agent needs training using experiences generated by the previous RL agent. Secondly, this section allows us to introduce the base algorithms used in our experiments (DQN and SAC) and essential definitions for our work, such as the value function, which plays a key role in the adaptive integration mechanism (Equation 2). - In the 'Safe RL' subsection of Preliminaries, we introduce the definition of safe RL and provide an example of a safe RL algorithm (WCSAC). In contrast, Section 5 explores a specific challenge that arises when applying the reset method to safe RL domains: the rapid increase in safety costs. We then describe how we incorporate the proposed method into WCSAC to tackle this challenge and present the corresponding results. Thus, we believe both sections are necessary to establish the background and motivation of our work. - We agree that there is overlap between the content of "Primacy Bias and Resetting Deep RL Agent" and the Introduction. We intend to streamline and simplify the duplicated portions within "Primacy Bias and Resetting Deep RL Agent" for the final manuscript. - We plan to relocate Algorithm 1 to the Appendix and offer a comprehensive explanation of it in the final manuscript. Furthermore, we will adjust the reset frequency in Algorithm 1 to $T_{reset}/N$. - We apologize for the confusion regarding Fig. 2. The y-axis of Fig. 2 represents the IQM metric of test return. In Fig. 2 (a) and (b), we normalized the test return using the base algorithm with a replay ratio of 1, while in Fig. 2 (c), we utilized the unnormalized test return (which is why the "base" baseline performance is below 1). To ensure consistency, we will make the necessary revision to display the normalized value in the final manuscript. 
- The value of $4\times 10^5$ is one example. **Regarding "Prevention of performance collapse"** - As discussed in Section 4.3, we can control the degree of performance collapse by adjusting $\beta$. In the rebuttal PDF file, we have incorporated a result from RDE using a higher $\beta$ on the Humanoid-run environment. The result, illustrated in Figure 4 of the rebuttal PDF file, shows that RDE with $\beta=300$ avoids performance collapse, affirming the effectiveness of our method. In addition, RDE with $\beta=50$ still performs better than the base algorithm even if performance collapses exist. **Regarding "Experiments and Hyperparameters"** - The hyperparameters have been tuned within the range of values used in prior work [2-1]. Furthermore, to ensure fair comparison, we established the hyperparameters to be consistent; for example, both SR and RDE employ the same number of reset operations per agent (see common response). Regarding the reset interval, we have included a performance comparison by adjusting it, as mentioned in the common response, for the reviewer's information. The range of hyperparameters will be detailed in the final manuscript. - We have mentioned the value of $N$ in the Appendix; however, we will relocate it to the main body. For Atari-100k and DMC, we used $N=2$, while for Minigrid, we used $N=4$. Notably, we conducted an ablation study on the number of ensemble agents, $N$, as outlined in "Ensemble and Reset Effect" in Section 4.3. During the rebuttal period, we also included an additional ablation study on $N$, and the corresponding results are shown in Figure 3 of the rebuttal PDF file. **Regarding "Reset frequency"** - We have discussed the concept of fair comparison related to reset frequency (interval) in the common response. To summarize, the reset operations of an individual agent in RDE should be the same as those of the SR agent, considering that the recovery times for DNNs of the same size tend to be similar.
Notably, we noticed a decrease in performance when the reset frequency for an SR agent corresponds to the frequency at which ensemble agents are reset. Please see the common response and Figure 2 in the rebuttal PDF file (the green line represents SR with the same number of reset operations as ensemble agents). **Regarding "PBT style method"** - Population-Based Training (PBT) involves training networks in parallel with diverse parameters and hyper-parameters. It periodically evaluates performance, selects the best hyper-parameters, and distributes them to other learners for training. While both PBT and our method involve multiple networks, it's important to note that PBT is not directly related to reset methods. Reset methods, on the other hand, focus on reinitializing the parameters of a deep neural network to avoid overfitting to early experiences and facilitate convergence towards the global maximum (or minimum). **Regarding "Using the oldest Q-function in adaptive composition"** - The rationale behind utilizing the oldest Q-function is that the estimated Q-value function of a recently reset network can be unreliable due to the limited time it has had to recover its performance. We have conducted an ablation study regarding this, and the corresponding result is shown in Figure 5 of the rebuttal PDF file in the common response. The result indicates that utilizing the recently reinitialized (newest) Q-function for adaptive composition yields poorer performance due to inaccurate cumulative return estimation. The first, second, and third oldest Q-functions exhibit similar performance since they all recover their performance to approximate the cumulative return. From a conservative standpoint, we believe employing the oldest Q-function is the most suitable choice for our method. [2-1] E.
Nikishin et al., "The primacy bias in deep reinforcement learning," ICML 2022 --- Rebuttal Comment 1.1: Title: Rebuttal Response (Increased score) Comment: Thank you very much for the detailed response and the many additional experiments. My concerns and issues have been mostly addressed and I am much more positive towards the presented work. I increase my score from 3 to 5. You stated in the rebuttal that you tuned the hyperparameters. How was this done? A grid search? Random search? How much tuning is required to get a good performance of the method? Did you use the same tuning budget for all methods? I am still concerned that the work requires substantial rewriting which might warrant a resubmission. Since other reviewers have not raised that point, though, I will discuss it in the reviewer discussion. --- Reply to Comment 1.1.1: Title: Response to Reviewer UDiR Comment: We thank the reviewer for the positive consideration of our work. - We conducted grid-based hyperparameter tuning, focusing on a range of values employed in previous research. This stems from the fact that our method is an extension of prior work [2-1], designed to prevent performance collapse and harness ensemble gains. Particularly in the environments utilized in the prior research, such as DMC, we initiated our search for appropriate values based on the parameters used in that previous work. For instance, in [2-1], a value of $2 \times 10^4$ was employed for $T_{reset}$ in the DMC environment. Consequently, we fine-tuned the values over $\{1 \times 10^5, 2 \times 10^5, 4 \times 10^5\}$. - Additionally, in environments such as the MiniGrid environment, which had not been explored in prior studies, we performed grid searches for hyperparameter tuning. For example, when tuning $T_{reset}$ for the Minigrid environment, we considered values within the set $\{2.5\times 10^4, 5\times 10^4, 1\times 10^5, 2 \times 10^5\}$.
Once we determine the appropriate recovery time for DNNs, achieving strong performance becomes more manageable. In addition, as mentioned in the common response, we set the hyperparameters with fairness in mind. - Note that we used the same tuning budget for the common hyperparameters. - As stated in the previous comment of the rebuttal, we will enhance our presentation in accordance with the reviewers' feedback. We are confident that the final manuscript will be well-structured and comprehensive.
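As a concrete reading of the staggered schedule discussed in this thread (each of the $N$ agents keeps the single-agent interval $T_{reset}$, so some agent resets every $T_{reset}/N$ steps), here is an illustrative sketch; the function and variable names are ours, not the paper's:

```python
def reset_schedule(total_steps, t_reset, n_agents):
    """Return {step: agent_index} for a staggered reset schedule.

    Each individual agent is reset every `t_reset` steps, but agents take
    turns, so some agent is reset every t_reset // n_agents steps.
    """
    interval = t_reset // n_agents
    schedule = {}
    for k, step in enumerate(range(interval, total_steps + 1, interval)):
        schedule[step] = k % n_agents  # agent k % N resets at this step
    return schedule

# Example: T_reset = 4e5, N = 2 -> one reset every 2e5 steps, alternating agents.
sched = reset_schedule(total_steps=1_200_000, t_reset=400_000, n_agents=2)
print(sched)
```

Note that each individual agent still sees a reset only every `t_reset` steps (matching the fairness argument above), even though the ensemble as a whole resets more often.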
Summary: This paper proposes a novel reset-based method that leverages deep ensemble learning to address the primacy bias issue in deep reinforcement learning. The authors construct $N$ ensemble agents and reset each ensemble agent sequentially to prevent performance collapses and improve sample efficiency. The proposed method is evaluated through experiments on various environments, including safe RL scenarios, and the results demonstrate its effectiveness and potential for real-world applications. Strengths: 1. The paper addresses an important issue in deep reinforcement learning, namely primacy bias, and proposes a novel method to mitigate its effects. This is valuable, as primacy bias can lead to overfitting and performance deterioration, which affects the applicability and efficiency of deep RL algorithms. 2. The use of deep ensemble learning in the proposed method is innovative and practical. Deep ensemble learning has shown effectiveness in domains such as image classification and RL, and leveraging the diversity gain of ensemble agents can enhance performance and prevent performance collapses. 3. The paper provides a comprehensive analysis of the proposed method, including its underlying operations and how it effectively prevents performance collapses. This analysis adds clarity and depth to the understanding of the proposed method. Weaknesses: 1. The paper stated that RDE was evaluated for tasks with safety requirements, but it only conducted one experiment on a safe RL benchmark. Thus the paper lacks effective validation of RDE with respect to safety violations. Simply stating that RDE does not cause performance collapse in general continuous control tasks cannot explain its role in safe RL. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you show more experimental results on safe RL benchmarks? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper stated that RDE was evaluated for tasks with safety requirements, but it only conducted one experiment on a safe RL benchmark. The paper has no negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Regarding "Results on safe RL benchmark"** - As described in the common response, we have conducted two additional experiments on the safe RL benchmark to show the effectiveness of the proposed method. The corresponding results are presented in Figure 1 of the rebuttal PDF file. These results clearly indicate that RDE outperforms WCSAC in both test performance and cost. As demonstrated earlier in the main paper, RDE addresses a critical issue of the naive reset method in the safe RL domain: the rapid increase in safety costs during training. Summarizing the aforementioned additional results along with the result on the safe RL domain in the main paper, it becomes evident that the proposed RDE not only ensures safety but also enhances sample efficiency. --- Rebuttal 2: Title: An Invitation for Further Discussion Comment: As the discussion stage is drawing to a close, we are eagerly awaiting your comments and suggestions. We believe that our responses, along with the additional experiments conducted during the rebuttal period, have effectively addressed your concern regarding additional experimental results on the safe RL benchmark. If you have any remaining concerns, we would greatly appreciate the opportunity to engage in a productive conversation with you. Thank you for your valuable time and feedback.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable comments. In this paper, we propose a novel method that incorporates ensemble learning into the resetting method to harness diversity gain and mitigate performance collapse. We provide various experiments on both standard and safe RL benchmarks, as well as ablation studies that demonstrate how the proposed method mitigates performance collapses. We believe the proposed method can contribute to practical RL algorithms by addressing sample efficiency and safety, which are critical challenges in RL. In response to the reviewers' comments, we conducted additional experiments and included the corresponding results in the rebuttal PDF file. Referring to the PDF file, we present our common response to the major concerns raised by the reviewers below: **1. Two Additional Experiments on Safe RL Benchmark** In addition to the results on Safexp-PointGoal1-v0, we have included results on the Safexp-PointButton1-v0 and Safexp-CarGoal1-v0 environments. The results indicate that RDE consistently outperforms WCSAC in terms of both test performance and cost across all considered environments, as shown in Figure 1 of the rebuttal PDF file. As RDE improves final performance while minimizing safety cost regret, we believe that RDE can significantly contribute to the safe RL domain. **2. Further Experiments Regarding Reset Interval** - The reset interval is an important hyperparameter in reset-based algorithms.
We have conducted performance comparisons by varying the reset interval, and the corresponding results are as follows:

| | RR = 1 | RR = 1 | RR = 2 | RR = 2 | RR = 4 | RR = 4 |
| :------------ | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: |
| $T_{reset}$ | $2 \times 10^5$ | $4 \times 10^5$ | $2 \times 10^5$ | $4 \times 10^5$ | $2 \times 10^5$ | $4 \times 10^5$ |
| **SR+SAC** | 1.08 | 1.02 | 1.15 | 1.15 | 1.13 | 1.21 |
| **RDE+SAC** | 1.10 | 1.20 | 1.10 | 1.17 | 1.16 | 1.25 |

It is seen that RDE outperforms the vanilla reset method. - To ensure fair comparison, we set the reset interval of one ensemble agent to match that of an SR agent. This is important because excessively frequent resetting can have a negative impact on the learning process, causing reset operations to occur before DNNs have fully recovered their performance. To illustrate this, we've included additional experiments using the reset interval of $T_{reset}^{rr=1}/(N\times rr)$ for SR (vanilla reset) in DMC and Minigrid environments. The corresponding results are shown in Figure 2 of the rebuttal PDF file (indicated by the green line). Notably, the highly frequent vanilla reset (with the same number of reset operations as in RDE) performs worse, even more poorly than the base algorithm in the Minigrid environment. **3. Further Experiments Regarding # of Ensemble Agents (N)** We have included additional experiments to confirm the effectiveness of ensemble learning. Increasing the number of ensemble agents, denoted as $N$, can lead to greater diversity gain. Additionally, the presence of $N-1$ non-reset agents can aid in effectively mitigating performance collapses after resetting. Illustrated in Figure 3 of the rebuttal PDF file, RDE with $N=4$ demonstrates improved sample efficiency in both Atari-100k and the Minigrid environment.
It is also observed in the Minigrid environment that RDE with $N=4$ more effectively prevents performance collapse in comparison to RDE with $N=2$. **4. Further Experiments Regarding Performance Collapse** We have presented a result from RDE with a higher $\beta$ on the humanoid-run environment to determine whether RDE can completely prevent performance collapse. The corresponding result is depicted in Figure 4 of the rebuttal PDF file. It is observed that RDE with $\beta=300$ does not experience any performance collapse. **5. Further Experiments Regarding Adaptive Composition of Ensemble Agents** We outlined the reasoning for employing the oldest Q-function in the main paper, considering that the recently reinitialized Q-function may not provide accurate estimations. To demonstrate this, we conducted an ablation study using alternative Q-functions, such as the 2nd oldest, 3rd oldest, and newest Q-functions. Figure 5 of the rebuttal PDF file illustrates that utilizing the newest Q-function leads to notably worse performance. From a cautious perspective, using the oldest Q-function for the adaptive composition is the most appropriate decision for our approach. **6. Presentation** - We will enhance the presentation of our paper using the insightful comments provided by the reviewers. For instance, in response to Reviewer UDiR's feedback, we plan to relocate Algorithm 1 to the Appendix and provide a comprehensive explanation of its details to address the concern that Algorithm 1's content may appear redundant with the information presented in Fig. 1. Furthermore, we intend to refine the figures to offer a clearer representation. - The computational overhead increases almost linearly as $N$ increases, which is a limitation of our work, and we will include it in the final manuscript. Pdf: /pdf/9696f1ac25fadfdb8eaa9db4c2fa27b84d9c312e.pdf
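To make the composition mechanism debated in this thread concrete, here is a hedged sketch of a softmax weighting over ensemble agents using Q-estimates from the oldest Q-function, with temperature $\beta$. This is our illustrative reading of the role of Equation 2; the function name and numbers are ours, not the paper's:

```python
import math

def composition_weights(q_values, beta):
    """Softmax weights over ensemble agents from the oldest Q-function.

    q_values[i] is the oldest Q-function's estimate for agent i's proposed
    action; a larger beta concentrates weight on the highest-value agent,
    suppressing a freshly reset (low-value) agent more aggressively.
    """
    m = max(q_values)  # subtract the max for numerical stability
    exps = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

# Agent 1 was just reset and its proposed action scores poorly.
q = [1.0, 0.4]
print([round(w, 4) for w in composition_weights(q, beta=2.0)])   # → [0.7685, 0.2315]
print([round(w, 4) for w in composition_weights(q, beta=50.0)])  # → [1.0, 0.0]
```

This also illustrates the $\beta=50$ vs. $\beta=300$ discussion above: raising $\beta$ pushes the composition toward ignoring the recently reset agent entirely, trading diversity for collapse prevention.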
NeurIPS_2023_submissions_huggingface
2,023
Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation
Accept (poster)
Summary: This work aims to make RL robust to distribution shifts by limiting reliance on spurious correlations between state features. This is achieved through a new RL algorithm designed in a variant of Robust MDPs extended to include a more structured uncertainty representation. Strengths: * This paper addresses a serious underlying flaw in the standard approaches for Robust MDPs, in that the uncertainty set is usually artificial and not aligned to the ways we need our policies to be robust. Weaknesses: The language is often confusing. For instance, the term "spurious correlations" is often used to refer specifically to spurious correlations between state variables. This is very confusing, as it overloads the term and makes it hard to refer to other spurious correlations (such as a spurious correlation between the agent's actions and the reward, or another agent's actions). Better to say "spurious state correlations". On a similar note, "semantic uncertainty" is a confusing term that wasn't explained until deep into the paper. It sounds like uncertainty **about** semantics (which is how it has been used in the literature), but it is meant to be uncertainty that is not just a norm-ball perturbation over the transition function. A better term would be "structured" uncertainty. * Other more minor confusing wordings make the paper more difficult to read than it needs to be:
  * I don't know what it means for portions of the state to "not have causality"
  * I don't know what it means for an uncertainty set to be "shaped by the unobserved confounder and sequential structure of RL"
  * I don't know what it means for some approach to be "superior in breaking spurious correlations"
  * There is a typo in: "Despite various types of uncertainty have been investigated in RL"

In addition, it's unclear how sensitive the approach is to the degree of robustness.
It is a well-known issue of RMDPs that the degree of robustness has to be carefully selected: too large and you have an overly conservative policy; too small and you get no robustness. Given that part of the argument for this approach is alleviating that problem (as shown in Figure 3), a sensitivity analysis here would greatly improve the paper, as a main drawback of RMDPs is that they are very sensitive to the robustness parameter. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How sensitive is the approach to changes in $\beta\%$ and $K$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: It is important to discuss somewhere how hyper-parameters (such as $K$ and $\beta\%$) were chosen, and how they could be chosen in a new domain. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
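The sensitivity issue this review raises can be illustrated with a toy robust Bellman backup: the robust value is a worst case over an uncertainty ball around the nominal transition model, so the chosen radius directly trades performance for conservatism. This is a generic RMDP illustration under our own simplifying assumptions, not the paper's formulation:

```python
def robust_backup(reward, next_values, nominal_probs, radius):
    """Worst-case expected value over an L1 ball around nominal_probs.

    Moves up to radius/2 probability mass from the best next state to the
    worst one (a crude inner minimization, enough to show the trend).
    """
    probs = list(nominal_probs)
    worst = min(range(len(next_values)), key=lambda i: next_values[i])
    best = max(range(len(next_values)), key=lambda i: next_values[i])
    shift = min(radius / 2.0, probs[best])  # L1 budget: radius = 2 * mass moved
    probs[best] -= shift
    probs[worst] += shift
    return reward + sum(p * v for p, v in zip(probs, next_values))

v_next = [10.0, 0.0]          # a good and a bad next state
nominal = [0.9, 0.1]
for sigma in (0.0, 0.2, 0.8):
    print(sigma, robust_backup(1.0, v_next, nominal, sigma))
```

The value decreases monotonically in the radius, which is exactly why an ill-chosen robustness parameter yields either no robustness (small radius) or excessive conservatism (large radius).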
Rebuttal 1: Rebuttal: We gratefully thank the reviewer for valuable suggestions and the recognition of our proposed problem setting. In what follows, we provide our response to the reviewer's comments. ### **Q1. Change the term "spurious correlation" to "spurious state correlation".** Thanks for raising this point. We agree that "spurious correlation" is generally used between different variables of interest and may lead to confusion without further specification. We will explicitly state that this work focuses on the spurious correlation between state variables, and change "spurious correlation" to "spurious state correlation" where appropriate. ### **Q2: Change the term "semantic uncertainty" to "structured uncertainty".** Thanks for the valuable suggestion. We agree that the terminology "structured uncertainty" better describes our setting, where the uncertainty set is not a norm ball under some divergence function but a possibly heterogeneous ball with a causal structure. ### **Q3: Study of the uncertainty level/degree sensitivity ($\beta\%$ and $K$).** Thanks for raising this question. Different from traditional RMDPs that use a radius parameter $\sigma$ to control the size of the uncertainty ball, our RSC-MDP characterizes the uncertainty set by constructing samples inside it, perturbing the confounder over the state space. To control the degree of the perturbation and hence implicitly control the uncertainty size, we introduce two hyper-parameters $\beta\%$ and $K$. We agree with the reviewer that the sensitivity analysis of these two parameters is critical, so we conduct additional experiments and show the results in **General Response (2)** (two tables).
The results provide three important findings: * **Our RSC-SAC is not sensitive to $\beta$.** Shown in the first table, the proposed RSC-SAC can perform well in both nominal and shifted settings, keeping good performance in the nominal setting while achieving robustness, for a large range of $\beta\%$ (10%-80%). It verifies that RSC-SAC is not sensitive to hyperparameter choices. * **Our RSC-SAC is not sensitive to $K$.** We evaluate the proposed RSC-SAC using different $K = [32, 64, \cdots, 1024]$ and achieve similar results, shown in the second table. It shows that RSC-SAC is not sensitive to the size $K$ of candidate samples for permutation. * **Performance-robustness tradeoff.** From the first table, when the ratio of perturbed data $\beta\%$ is very small (1%), RSC-SAC achieves almost the same results as vanilla SAC in nominal settings and shows no robustness in shifted settings. As $\beta\%$ increases (considering more robustness), the performance of RSC-SAC in the nominal setting gradually gets worse, while it conversely gets better in the shifted settings (more robust). However, when the ratio is too large (>80%), the performance of RSC-SAC in both settings degrades a lot, since the policy is too conservative and fails in all environments. ### **Q4: Other comments about writing.** Thanks for the careful reading and valuable suggestions about the writing. We address them below: * "...different portions of the state that do not have causality..." -> "different portions of the state that don't have correlations induced by the unobserved confounder." * "an uncertainty set shaped by the unobserved confounder and sequential structure of RL." -> "an uncertainty set determined by some causal structure and unobserved confounder" * "an approach that is superiority in breaking spurious correlations..."
-> "an approach that can achieve robust performance by avoiding learning useless spurious correlations" * "Despite various types of uncertainty have been investigated in RL,..." -> "Despite various types of uncertainty that have been investigated in RL,..." --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the clarifications. Your comments both here and to the other reviews largely address my concerns, and I will be increasing my score accordingly. --- Rebuttal 2: Title: Thanks for your insightful suggestions! Comment: Dear reviewer, Thank you once again for investing your valuable time in providing feedback on our paper. Your insightful suggestions have led to significant improvements in our work, and we look forward to possibly receiving more feedback from you. Since the discussion period between the author and reviewer is rapidly approaching its end, we kindly request you to review our responses to ensure that we have addressed all of your concerns. Also, we remain eager to engage in further discussion about any additional questions you may have. Best, Authors
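The $\beta\%$ knob discussed in Q3 above can be pictured as simple batch mixing: a fraction $\beta\%$ of each sampled replay batch is replaced with generated (perturbed) transitions before the SAC update. A minimal sketch with illustrative names; the actual integration is as described in the paper:

```python
import random

def mix_batch(replay_batch, generated_batch, beta_percent, rng=random):
    """Replace roughly beta_percent% of a replay batch with generated transitions."""
    batch = list(replay_batch)
    n_replace = int(len(batch) * beta_percent / 100)
    idx = rng.sample(range(len(batch)), n_replace)       # slots to overwrite
    for i, g in zip(idx, rng.sample(generated_batch, n_replace)):
        batch[i] = g
    return batch

rng = random.Random(0)
replay = [f"real_{i}" for i in range(10)]
generated = [f"gen_{i}" for i in range(10)]
mixed = mix_batch(replay, generated, beta_percent=30, rng=rng)
print(sum(t.startswith("gen") for t in mixed))  # → 3
```

At $\beta\% = 1$ this reduces to nearly vanilla SAC (no robustness), while at $\beta\% > 80$ most training signal comes from perturbed data, matching the tradeoff described above.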
Summary: The paper proposes a state-confounded (SC-) and a robust state-confounded (RSC-) MDP formulation to account for setups where a confounder satisfying the backdoor criterion confounds the states. The RSC-MDP setup assumes that the confounder lies in an uncertainty set which is part of the MDP parameters, and the aim is to learn a value function that is the lower bound of the different value functions arising out of different instances of the RSC-MDP at different values of the confounder within its uncertainty set. They propose an empirical algorithm to approximate the effect of perturbing the confounder by perturbing single dimensions of the states and setting them to values present in 'otherwise nearby' states (defined by the distance between the two states in dimensions other than the dimension being edited). The paper claims that this indirectly estimates the effects of perturbing the confounder. They learn a graph-based model that is supposed to be a causal model, to generate new transitions (given a state-action pair $s_t, a_t$, generate $s_{t+1}, r_t$) based on the perturbed states and (original) action pairs. They augment SAC by randomly replacing part of the sampled batches with the generated transitions and learn as usual, and show that this recipe is robust to the spurious correlation in the nominal envs they define. All experiments are in the state space (non-vision inputs). Strengths: 1. The paper studies an important problem and proposes a set of benchmarks with handcrafted spurious correlations in a CARLA environment and in a robosuite environment, which could be useful for future work studying spurious correlations in RL. 2. The RSC-MDP formulation could also be useful, although I'm not sure how well the notation and proofs scale to an arbitrary number of confounding variables. Weaknesses: 1.
The paper structure is somewhat confusing to me: the MDP formulations and the empirical algorithm don't seem to have much of a connection, and it seems like both were developed independently - please correct me if I'm missing something here. It would be great to further clarify exactly how the algorithm is helping to solve the robust SC-MDP. 2. The notion of semantic uncertainty is not very precise or clear (I don't think there are any references either), and could be removed in my opinion, since the RSC-MDP formulation can simply use the "uncertainty set" phrasing. 3. The appendix suggests that the training environments have perfect spurious correlations: the correlation isn't broken in even a few instances of the training environments, so, for example, there would never be a (non-generated) transition in the SAC buffer with a low brightness value in its state, in the case that the training envs follow the correlation regime where brightness and traffic are both high during the day. It's unclear to me, then, why the causal model would ever generate a state value that doesn't also have the brightness value set to high (same logic for the perturbation procedure). Am I missing something else here? Or are the training envs set up such that an agent will train on two kinds of envs at once - night time with less traffic and day time with more traffic - so it's possible to see samples with the brightness value set to both low and high across otherwise nearby states? It would be great to clarify what exactly the train and test env distributions are and what is expected of the graphical causal model - is it to generate counterfactual transitions (counterfactual to what?) - and to conclusively show that it is actually doing that by visualizing or plotting some property of the samples it is generating. It's possible that I have misinterpreted something here, and I'm willing to raise my score if the authors' response clarifies some of these questions. 4.
I would also suggest mentioning relevant related work explicitly aimed at resolving causal confusion in online [1] and offline RL [2] (and comparing to the closest version if applicable). 5. It would be great to include the training curves for the results in Table 1 as well, since that gives insight into whether other methods simply converge later to a similar highest reward, or whether they are entirely limited in their ability to converge to a similarly high reward as the best performing method. [1] Resolving Causal Confusion in Reinforcement Learning via Robust Exploration . Clare Lyle, Amy Zhang, Minqi Jiang, Joelle Pineau, Yarin Gal. ICLR Self-Supervised RL Workshop 2021. [2] Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning? Gunshi Gupta, Tim G. J. Rudner, Rowan Thomas McAllister, Adrien Gaidon, Yarin Gal, NeurIPS Offline RL Workshop 2022, CleaR 2023 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. How is the experiment to answer R3 designed? It's unclear how to test the w/o P^{c} case since the confounder isn't known anyway? 2. As stated previously, it would help to get more insights into how novel the generated transitions from the causal model really are. 3. Have you observed or quantified the robustness-performance tradeoff mentioned in line 263, which will lead to a performance drop in some envs? I expect the perturbation procedure of states to only work for specific kinds of envs, so it would be good to show the failure cases as well. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Given how heuristic the technique of generating perturbed states is (swapping single dimensions between otherwise nearby states), the paper should properly discuss the many challenges of scaling this approach to a high-dimensional state space like that in visual domains. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
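The perturbation heuristic summarized in this review (overwrite one state dimension with the value from an 'otherwise nearby' state, where nearness is measured on the remaining dimensions) can be sketched as follows; this is our reading with toy data, not the authors' implementation:

```python
import numpy as np

def perturb_dimension(states, idx, dim):
    """Replace states[idx, dim] with that dimension's value from the state
    nearest to states[idx] in all dimensions *except* `dim`."""
    others = np.delete(states, dim, axis=1)            # drop the edited dimension
    dists = np.linalg.norm(others - others[idx], axis=1)
    dists[idx] = np.inf                                # exclude the state itself
    neighbor = int(np.argmin(dists))
    perturbed = states[idx].copy()
    perturbed[dim] = states[neighbor, dim]             # swap in the neighbor's value
    return perturbed, neighbor

# Toy batch: dim 0 is "brightness", dims 1-2 describe the rest of the scene.
states = np.array([
    [1.0, 0.0, 0.0],   # bright, scene A
    [0.0, 0.1, 0.0],   # dark,  scene A' (nearby in dims 1-2)
    [1.0, 5.0, 5.0],   # bright, scene B (far away)
])
new_state, nb = perturb_dimension(states, idx=0, dim=0)
print(new_state, nb)  # state 0 keeps its scene but takes the dark brightness value
```

This also makes the review's scaling concern concrete: with image observations there is no single "brightness" coordinate to swap, so the per-dimension heuristic would not transfer directly to high-dimensional visual state spaces.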
Rebuttal 1: Rebuttal: We gratefully thank the reviewer for recognizing our contributions to problem formulation and the creation of a useful benchmark! We provide our response below: ### **Q1: The connection between the proposed problem formulation (robust SC-MDP) and the empirical algorithm.** The empirical algorithm implicitly constructs and manipulates the unknown uncertainty set proposed in the robust SC-MDP formulation. To solve the RSC-MDP problem, we need to optimize over an unknown uncertainty set of transition kernels -- a structured ball around some nominal confounder value. Lacking information on both the structure and the nominal value, the empirical algorithm approximates the uncertainty set by constructing samples within it -- perturbing the states to mimic different confounder values around the nominal value. ### **Q2: The notion of semantic uncertainty.** Thanks for the valuable suggestion. We agree that the term "semantic uncertainty" may cause confusion without further explanation and references. We replace it with "structured uncertainty", since the uncertainty set is determined by some underlying task-specific causal structure. ### **Q3: What exactly are the train and test env distributions? Show that our approach actually generates the desired samples by visualizing them.** We use Brightness to illustrate the train and test envs. We assume the latent confounder has 4 values: $z=0$ (generate day-heavy samples), $z=1$ (generate night-light samples), $z=2$ (generate day-light samples), and $z=3$ (generate night-heavy samples). The training environment is generated with $z=0,1$ and the testing environment is generated with $z=2,3$. Therefore, the shift between training and testing comes from different compositions of brightness and traffic.
**Since we cannot explicitly set $z=2,3$ in training, our perturbation method simulates the effect of setting $z=2,3$ by perturbing the training data (with $z=0,1$).** **We visualize the generated trajectories of Lift in General Response (3) (including figures and a detailed explanation), with comparisons to the original trajectories in the training environment, showing that our perturbation method can generate counterfactual examples with respect to the unobserved confounder.** ### **Q4: Mention the related works [1][2] explicitly.** Thanks for providing the important related works [1][2]. We add the following discussion to our related work: > [1] designs an exploration algorithm to conduct interventions and improve state-action coverage to avoid biased data collection. However, our work deals with a more general setting, where the testing environment can vary from the training one, leaving the exploration strategy inapplicable. > [2] proposes an uncertainty-based acquisition function to resample from the data buffer, learning more from the samples that do not have spurious correlations. This method does not explicitly handle spuriousness and could fail when spurious correlations exist in most samples. **We add [2] as a baseline and show the results in General Response (1)**. [1] Resolving Causal Confusion in Reinforcement Learning via Robust Exploration. Lyle et al. ICLR Workshop 2021. [2] Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning? Gupta et al. CleaR 2023. ### **Q5: The robustness-performance tradeoff.** The robustness-performance tradeoff is mainly determined by $\beta\%$, the ratio of the perturbed data. To investigate this tradeoff further, we add an ablation study on $\beta\%$ and show the results in **General Response (2)**. Two important messages: * **The existence of the tradeoff.** When the ratio of perturbed data $\beta\%$ is very small (1%), RSC-SAC achieves results similar to vanilla SAC in nominal settings and shows no robustness in shifted settings. 
As $\beta\%$ increases (considering more robustness), the performance of RSC-SAC in the nominal setting gets worse, while its performance in the shifted settings improves (more robust). However, when the ratio is too large (>80%), the performance of RSC-SAC in both settings degrades substantially, since the policy is too conservative. * **Our RSC-SAC maintains both robustness and good performance.** Although the tradeoff exists, the proposed RSC-SAC can perform well in both nominal and shifted settings for a large range of $\beta\%$ (10%-80%) -- keeping good performance in the nominal setting and achieving robustness. ### **Q6: How are the experiments designed to answer R3? How to test the w/o $P^{c}$ case since the confounder is unknown?** * **How the 3 ablation studies in R3 are designed.** All 3 ablation methods are modifications of RSC-SAC. For "w/o $\textbf{G}\_\phi$", we use a full graph to replace the learnable causal graph during training. For "w/o $P_{\theta}$", we replace the entire causal model with a fully-connected NN. For "w/o $P^c$", we do not apply the state perturbation introduced in Section 4.1. * **The experiments of w/o $P^{c}$.** We denote the unknown confounder distribution as $P^{c}$. "w/o $P^{c}$" means that we do not permute the dimensions of states in Equation 6 in the data generation process, so the generated data will still follow the same distribution as the nominal training environment. We change it to "w/o $P^{c}$ perturbation" in Table 4 to avoid confusion. ### **Q7: Discussion about scaling our approach to high-dimensional state spaces such as visual domains.** We add more discussion in the conclusion section. Please check **General Response (4)**. ### **Q8: Adding the training curves for the results in Table 1.** The training curves are shown in the bottom row of Figure 5, where we display the testing reward on the shifted testing environment as the training step increases. 
After a long training process, the proposed method RSC-SAC still outperforms all baselines. To verify that all methods have converged (entered a flat region), we also plot the testing reward on the nominal environment during training in the top row of Figure 5. --- Rebuttal 2: Title: Thanks for your insightful suggestions! Comment: Dear reviewer, Thank you once again for investing your valuable time in providing feedback on our paper. Your insightful suggestions have led to significant improvements in our work, and we look forward to possibly receiving more feedback from you. Since the discussion period between the author and reviewer is rapidly approaching its end, we kindly request you to review our responses to ensure that we have addressed all of your concerns. Also, we remain eager to engage in further discussion about any additional questions you may have. Best, Authors --- Rebuttal Comment 2.1: Title: Thanks for your response and new results Comment: Dear Authors, Thank you for adding the dynamics visualization as well as another baseline comparison with the proposed method. I think the comparison helps to highlight how the proposed method tackles a problem that is not addressed by the baseline - that of compositional generalization (please correct me if I'm conflating different things). I had the following question before I can update my score: Is the following correct? To generate a training example for the causal graph model, the following procedure will be followed: We will take a transition (presumably from a daylight driving scenario) with day-light=high and traffic=heavy (and the other dimensions being set to some values), and, assuming that we decide to perturb the day-light dimension of the current state, we look for states in the buffer where the day-light value is very different but the other values are similar (therefore likely traffic=heavy, day-light=low, and other dimensions similarly valued). 
Now the s_{t+1} that is used to supervise the causal model output is still the same as that in the original transition (presumably day-light=high and traffic=heavy). I'm confused why the causal model should be expected to predict that a day-light heavy state transitions to a state that is day-light low -- unless the causal graph completely ignored the day-light dimension entirely. Getting the model to ignore the day-light dimension entirely is what I assume is being enforced by the sparsity loss - which leads me to the hypothesis that essentially what the causal graph is doing is helping to enforce the principle of "looking or depending on as few input dimensions as possible". This is important to note, as there is prior work [1] testing this idea for imitation learning by using some sort of dropping out of the input representation of a policy, (they don't make the assumption of operating in the state space). This also suggests that a similar trick might work if for example you removed the causal graph and perturbation mechanism entirely and trained a SAC policy with a input dropout mechanism on the state space inputs since it might enforce the same invariance to irrelevant dimensions. I think comparing the proposed method to this simple baseline will greatly help to refute this claim, as well as further explain what benefit the causal model is really bringing. I do think the proposed method has potential merit, but given the complexity and added assumptions of the additional components in the proposed method, I would like to be sure by dissecting rigorously where the gains are coming from. Thanks for your responses so far, and looking forward to engaging further. 
[1] Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning. Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu. --- Reply to Comment 2.1.1: Title: Response to follow-up questions Comment: Thank you for engaging in the discussion and providing insightful feedback! We provide new experiments as well as analyses to answer your questions. ### **Q1: The proposed algorithm (RSC-SAC) tackles compositional generalization** Yes, we fully agree that there is a strong connection between compositional generalization and breaking spurious correlation. In fact, in Section 5.1, we propose two kinds of environment settings of spurious correlation: 1. **Distraction correlation**: Between task-relevant and task-irrelevant features. The task-irrelevant feature is a distractor and should be ignored by the policy (random dropping may solve this); 2. **Composition correlation**: Between two task-relevant features. This exactly describes the compositional generalization setting, where the testing environments contain new combinations of task-relevant features (random dropping cannot help). ### **Q2: Correctness of the example of the daylight driving scenario** The reviewer accurately describes most aspects of the example, with the exception of the last sentence. After obtaining the perturbed $\{s_t, a_t\}$ (e.g., traffic=heavy, day-light=low), we infer $s_{t+1}$ from our causal model with counterfactual generation. We expect a new $s_{t+1}$ obtained by imagining a different value in $s_t$. In experiments, we observe that most $s_{t+1}$ are different from the original one and have traffic=heavy and day-light=low. ### **Q3: Adding experiments -- prior work OREO [1] as a baseline** We evaluate the performance of OREO [1] with different ratios of dropping ($\alpha$) on the Brightness (Distraction) and Behavior (Composition) environments. 
The results are shown below: |Method|Ours|OREO ($\alpha$=0.1)|OREO ($\alpha$=0.2)|OREO ($\alpha$=0.3)|OREO ($\alpha$=0.4)|OREO ($\alpha$=0.5)| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |Brightness (nominal)|0.92±0.31|**0.973 ± 0.239**| 0.739 ± 0.327| 0.371 ± 0.229| 0.26 ± 0.243| 0.207 ± 0.197| |Brightness (shifted)|**0.99±0.11**|0.891 ± 0.153| 0.562 ± 0.18| 0.256 ± 0.087| 0.182 ± 0.087| 0.128 ± 0.059| |Behavior (nominal)|**1.06±0.07**| 1.04 ± 0.104| 0.989 ± 0.122| 0.855 ± 0.259| 0.715 ± 0.282| 0.482 ± 0.224| |Behavior (shifted)|**1.02±0.09**| 0.517 ± 0.208| 0.541 ± 0.121| 0.553 ± 0.107| 0.509 ± 0.169| 0.366 ± 0.137| 1. Our method outperforms OREO in both Brightness (shifted) and Behavior (shifted) environments. We find that OREO indeed achieves robustness (but still worse than ours) in Brightness, which only contains the distraction correlation. 2. The advantage of our method (swap values of some dimensions of states) over OREO (drop some dimensions of states) could be explained from two aspects: * **Our method has compositional generalization, while OREO does not.** Swapping dimensions within the state creates new compositions of features that can address both compositional generalization and distraction issues. However, dropping information (OREO) can only deal with the distraction issue but does not make policy generalizable to unseen feature combinations. * **Dropping dimension of states loses information.** [1] focuses on image input which has a spatial structure -- still contains enough information after dropping some dimensions. However, in our state input setting (each dimension has semantic meaning), dropping dimensions of the state could cause severe information loss. The evidence is that increasing the ratio of dropping dramatically degrades performance. [1] Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu
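The dimension-swap mechanism debated above (pick a donor state from the buffer whose other dimensions are similar, and copy over only the chosen dimension) can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions: the function name, candidate count, and nearest-neighbor heuristic are ours, not the authors' released code.

```python
import numpy as np

def swap_perturb(states, dim, num_candidates=128, rng=None):
    """Illustrative sketch: for each state, replace dimension `dim` with the
    value from a buffer state whose other dimensions are similar but whose
    `dim`-th value differs, mimicking a different confounder value."""
    rng = rng or np.random.default_rng(0)
    perturbed = states.copy()
    n = len(states)
    for t in range(n):
        # sample candidate donor states (here the batch stands in for the buffer)
        cand = states[rng.choice(n, size=min(num_candidates, n), replace=False)]
        # distance computed on all dimensions except `dim`
        others = np.delete(cand, dim, axis=1)
        ref = np.delete(states[t:t + 1], dim, axis=1)
        dist = np.linalg.norm(others - ref, axis=1)
        # among the nearest candidates, pick the donor whose `dim` value differs most
        nearest = cand[np.argsort(dist)[:8]]
        donor = nearest[np.argmax(np.abs(nearest[:, dim] - states[t, dim]))]
        perturbed[t, dim] = donor[dim]  # swap only the chosen dimension
    return perturbed
```

Only the chosen dimension changes, so the generated sample keeps its other features intact; this is the counterfactual recomposition that plain input dropout cannot create.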
Summary: This paper aims to address the spurious correlation challenge that arises in RL. Such correlation is typically useless to decision-making but may be learned by agents, which leads to failures when applied to unknown test cases. To this end, the authors proposed a novel RSC-MDP framework that models the spurious correlation challenge with an unobserved confounder. The authors then justify that previous robust MDP methods may fail on the RSC-MDP framework due to inaccurate uncertainty modeling. The authors proposed a heuristic method to handle the RSC-MDP challenge, and then conducted extensive experiments to compare the proposed method and SOTA baselines for robust RL. Experiments justified that the proposed method has better performance under the proposed spurious correlation attacks. Strengths: 1. The paper is very well written, with a clear explanation of motivation and a well-organized problem setup and experimental design. 2. The authors provide theoretical justification that the proposed robustness is milder and more accurate than the traditional robust MDP. 3. The authors provide a novel experimental design to assess the effect of spurious correlation in the state space of RL. The proposed baseline has the potential to be adopted by future studies. Weaknesses: 1. In the proposed driving example, the confounder issue spans over multiple steps of interactions, whereas the latent variable c introduced in RSC-MDP only affects one step, and each step is affected independently (i.e., $c_t$ is independently generated for all $t$). In reality, the brightness and traffic density do not normally get perturbed at each step of the transition. Do the proposed RSC-MDPs cover the targeted spurious correlation problem? 2. It seems that the confounding within the state is not affecting the distribution of transitions; that said, the observation still follows the distribution of trajectories after marginalizing the confounder $c$. 
To me, the spurious correlation challenge of RSC-MDPs is more of a robustness challenge than a confounding challenge, where the offline trajectory distribution is typically different from the interventional transition dynamics. How does the spurious correlation within each state affect the learning of the transition P(s_{t+1} | s_t, a_t), hence affecting regular RL algorithms? 3. Could the authors provide more reasoning on the proposed heuristic perturbation in approximating the true underlying perturbation of latent variables? Intuitively it seems to align well with the motivating driving example, but it seems that the perturbation proposed in equation (6) introduces extra correlations within states. In particular, the sample $s^i_k$ selected to replace $s^i_t$ is based on a rule that explicitly involves $s^j_t$ for $j\neq i$. Do we worry that such introduced correlation ruins training for reasons similar to the targeted spurious correlation? Misc: line 81: discounted infinite-horizon -> finite horizon? line 220: $\sigma_2 \in (3/4, 1]$ according to the proof. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable suggestions and the praise of our systematic formulation and experiments. In what follows, we provide our response to the reviewer's comments. ### **Q1: Does the proposed RSC-MDP formulation cover the targeted spurious correlation problems in the real world and our experiments?** Thanks for pointing out this important question! **Our RSC-MDP is a general formulation designed to cover most types of unobserved confounders.** Note that the proposed RSC-MDP allows, but does not require, the confounder variable to vary across different time steps, i.e., $c_t$ can have different values for different $t$ but is not forced to. So it includes the case where the confounder variable remains the same over the entire horizon, i.e., $c_1 = c_2 = \cdots = c_T$. In addition, RSC-MDP can deal with other scenarios where the confounder variable $c_t$ changes over time, e.g., the car is driving through sudden weather changes. ### **Q2: How does the spurious correlation within each state affect the learning of transition $P(s_{t+1} |s_t, a_t)$, hence affecting regular RL algorithms?** Thanks for raising this insightful question. Recall the transition kernel for the proposed state-confounded MDP (SC-MDP) -- $s_{t+1} \sim \mathcal{P}\_t(\cdot | s_t, a_t, c_{t})$. The spurious correlation -- represented by an unknown confounder distribution $P_t^c$ (i.e., the confounder $c_t \sim P_t^c$) -- will determine the expected transition kernel $\sum_{c_t} \mathcal{P}\_t(\cdot | s_t, a_t, c_{t}) P_t^c(c_t)$. So the expected transition kernel will change if $P_t^c$ changes. The reviewer is correct that if the distribution $P_t^c$ is fixed, the influence of the confounder on the transition kernel will not exist after marginalizing w.r.t. $c_t$. However, in this work, we desire to learn a policy that can address possible perturbations of the confounder distribution $P_t^c$ --- robust SC-MDP. 
In this case, the standard RL algorithm may learn the transition kernel based on one confounder distribution $P_t^c$ (spurious correlation) and fail catastrophically when $P_t^c$ varies in the testing environment. ### **Q3: More reasoning on the proposed heuristic perturbation for approximating the true underlying perturbation of the latent variables?** Thanks for raising this valuable question. We would like to answer this question from two aspects: * **Why our perturbation within the state approximates changing latent variables.** We provide more explanation through a concrete example, which visualizes the original state trajectories and the trajectories generated by our perturbation algorithm. Please refer to the figure in **General Response (3)**. In the nominal environment, the green cube is always initialized on the left part of the table and the red cube is initialized on the right part. We assume the latent variable $z$ is discrete and has 4 values: $z=0$ (generates green-left samples), $z=1$ (generates red-right samples), $z=2$ (generates green-right samples), $z=3$ (generates red-left samples). The nominal environment only contains cases with $z=0,1$, which have a strong spurious correlation between color and position (Figure (a)). Without explicitly setting $z=2,3$ (we cannot do this during training since $z$ is unobserved), we directly perturb the states to mimic the effect of controlling $z$, which gives us the samples in Figure (c) that contain the cases with $z=2,3$ (green-right and red-left samples). * **Why perturbation within the state works and does not introduce additional spurious correlation.** The correlation between two dimensions in a state could be either causation or spurious correlation. To prevent the neural network from overfitting to harmful spurious correlations, we randomly perturb a small portion of the data ($\beta\%$) by swapping some dimensions of states to regularize the model. 
Since this is a random perturbation, it does not introduce additional spurious correlation. As we only perturb a small portion of the data, the model can still learn meaningful features from the remaining large portion of the data. ### **Q4: Other minor comments.** Thanks for improving the writing of our work. We have revised and polished the paper according to the reviewer's comments. --- Rebuttal 2: Title: Thanks for your insightful suggestions! Comment: Dear reviewer, Thank you once again for investing your valuable time in providing feedback on our paper. Your insightful suggestions have led to significant improvements in our work, and we look forward to possibly receiving more feedback from you. Since the discussion period between the author and reviewer is rapidly approaching its end, we kindly request you to review our responses to ensure that we have addressed all of your concerns. Also, we remain eager to engage in further discussion about any additional questions you may have. Best, Authors
Summary: The paper studies how to develop more robust reinforcement learning algorithms when spurious correlation exists in the observation space. The paper studies the problem from a causal perspective and presents the robust state-confounded MDP as the problem formulation. The paper proposes an algorithm to solve this problem formulation by learning the structural causal model. The paper demonstrates experimentally that the proposed method is more robust to shifts in the environment compared to baselines. Strengths: The paper is mostly extremely well written and easy to follow. The structure of the paper makes sense: it first discusses the formulation of the problem in the form of the robust state-confounded MDP and a comparison to existing formulations. The algorithm the paper proposes is intuitive, and the improvements over the baselines in the experiment section are clear. Weaknesses: The discussion around related work focuses on a few most relevant papers and does not address the broader literature, if I am not mistaken. The problem of spurious correlation is known within the community and has been tackled by various prior works [1, 2]. Please see [2] for example, which also learns the dynamics model to alleviate the issue of spurious correlation. These prior works tackle this problem even though they did not propose the same robust state-confounded MDP that the paper proposes. I would have liked to see a more thorough comparison with the broader literature in a related work section. On page 3, the paper writes “people usually prescribe the uncertainty set of the transition kernel using a heuristic and simple function rho with a relatively small sigma”. However, because of the lack of a related work section, I do not see evidence for this statement. 
[1] https://ben-eysenbach.github.io/rpc/ [2] https://arxiv.org/abs/1909.11373 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see my questions about related work Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please see my questions about related work. I would have liked to see the limitation that the method currently only works for state-based policies more prominently displayed and discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude to the reviewer for their insightful feedback. We are glad to know that the reviewer recognizes the novelty of our contributions, the clarity of our problem formulation, and the empirical algorithm that sufficiently shows the advantages compared to baselines. We provide our response to the questions below. ### **Q1: Comparisons with broader literature such as [1][2].** Thanks for the important question and for providing the related works [1][2]. * We have included a thorough related work section in Appendix B in the original manuscript with comparisons to literature about 1) other related RL formulations, 2) robustness in RL investigations, and 3) spurious correlation in RL. Within Appendix B.3, we discussed different spurious correlation types that have been considered in the RL community. * We will definitely add [2] as a broader reference as the reviewer suggested since it also addresses spurious correlation in RL. We want to note that [2] is somewhat distant from our topic since 1) [2] sought to solve multi-task RL, while we focus on single-task RL; 2) the spurious correlation that [2] deals with is largely different from the one considered in this work: [2] considers the spurious correlation between the dataset distribution and the dataset identity to design a better task inference module, while the spurious correlation considered in this work is between different states. * After carefully reading [1], we found that it is not quite related to spurious correlation but is implicitly related to robustness in RL. So we will add it to the related work section in Appendix B.2. [1] sought to learn a 'simple' policy by minimizing the information it uses to seek better performance in standard RL, which turns out to have some robustness benefits. 
### **Q2: Related works about robustness in RL.** For the claim mentioned by the reviewer -- "people usually prescribe the uncertainty set of the transition kernel using a heuristic and simple function rho with a relatively small sigma" on page 3 -- we add references [3][4] to the end of the sentence as support. A more detailed review of the uncertainty sets investigated in existing works can be found in Appendix B.2, where we thoroughly summarize that most existing works use task structure-agnostic and heuristic 'distances' such as KL divergence and total variation. ### **Q3: More discussion about the potential of the proposed method, which currently only works for state-based policies.** We add more discussion in the last section: > The current method has great potential to be applied to more complicated problems. In particular, the proposed method requires swapping the dimensions of states to break spurious correlation, where it implicitly assumes that each dimension has semantic meaning. In more complicated problems such as high-dimensional image states, each dimension becomes a pixel with no semantic meaning, and the number of dimensions may explode. To extend our method to such cases, we can leverage existing state abstraction techniques [5,6] to project images to a low-dimensional latent space, where each dimension represents a semantic feature. Then, our method can be straightforwardly applied to the low-dimensional latent space. [1] Robust predictable control. Eysenbach, Ben, Russ R. Salakhutdinov, and Sergey Levine. NeurIPS 2021. [2] Multi-task batch reinforcement learning with metric learning. Li, Jiachen, Quan Vuong, Shuang Liu, Minghua Liu, Kamil Ciosek, Henrik Christensen, and Hao Su. NeurIPS 2021. [3] Toward theoretical understandings of robust Markov decision processes: Sample complexity and asymptotics. Yang, Wenhao, Liangyu Zhang, and Zhihua Zhang. The Annals of Statistics 50.6 (2022): 3223-3248. 
[4] Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity. Shi, Laixi, and Yuejie Chi. arXiv preprint arXiv:2208.05767 (2022). [5] Disentangling by factorising. Hyunjik Kim and Andriy Mnih. In International Conference on Machine Learning, pages 2649–2658. PMLR, 2018. [6] A theory of state abstraction for reinforcement learning. David Abel. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9876–9877, 2019. --- Rebuttal Comment 1.1: Title: Thank you for updating related works Comment: I am keeping my score as is, since I am not entirely convinced the related work section is well written just yet. For example, on page 16, there is this sentence "The proposed RSC-MDPs can be regarded as addressing the state uncertainty since the shift of the unobserved confounder leads to state perturbation. In contrast, RSC-MDPs consider the out-of-distribution of the real state that will directly influence the subsequent transition in the environment, but not the observation in POMDPs and SA-MDPs that will not directly influence the environment." I am quite confused what the "In contrast" refers to, since the previous sentence also discusses RSD-MDPs. --- Reply to Comment 1.1.1: Title: Thanks for engaging in discussion Comment: Thank you for engaging in discussion with us and pointing out this question. To directly answer the reviewer's question, 'in contrast' refers to other prior works that also address state uncertainty as our RSC-MDPs. As the reviewer suggested, we have revised the related work section. We hope the following sentences make sense to the reviewer. > The proposed RSC-MDPs can be regarded as addressing the state uncertainty since the shift of the unobserved confounder leads to state perturbation. 
**In contrast to prior works which also address state uncertainty**, RSC-MDPs consider **distribution shift** of the real state that will directly influence the subsequent transitions in the environment, ~~but not~~ instead of the observation in POMDPs and SA-MDPs that will not directly influence the environment **but implicitly influences the policy**."
Rebuttal 1: Rebuttal: We thank the reviewers for their careful reading of the paper and their insightful and valuable feedback. We provide new experimental results and discussions to answer some common questions raised by reviewers. ### **(1) Add a new baseline [1], which also tackles spurious correlation in RL.** |Env|Brightness|Behavior|Crossing|CarType|Lift|Stack|Wipe|Door| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |[1] (nominal)|**1.07±0.10**|1.00±0.02|**1.08±0.06**|**1.00±0.02**|**0.99±0.03**|0.90±0.12|**0.93±0.20**|**0.99±0.05**| |Ours(nominal)|0.92±0.31|**1.06±0.07**|0.96±0.03|0.96±0.03|0.96±0.05|**1.04±0.08**|0.92±0.14|0.98±0.05| |[1] (shifted)|0.47±0.14|0.83±0.09|0.14±0.03|0.77±0.14|0.35±0.09|0.24±0.12|0.17±0.17|0.05±0.02| |Ours(shifted)|**0.99±0.11**|**1.02±0.09**|**1.04±0.02**|**1.03±0.02**|**0.98±0.04**|**0.77±0.20**|**0.85±0.12**|**0.61±0.17**| The results indicate that [1] has very limited robustness in the shifted testing environment compared to our method, especially in Crossing, Stack, Wipe, and Door tasks. [1] Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning? Gunshi Gupta, Tim G. J. 
Rudner, Rowan Thomas McAllister, Adrien Gaidon, Yarin Gal, NeurIPS Offline RL Workshop 2022, CleaR 2023 ### **(2) Add sensitivity analysis of the ratio of perturbed data $\beta\%$ and the number of perturbation candidates $K$.** |$\beta\%$|1%|10%|20%|30%|40%|50%|60%|70%|80%|90%|100%| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |CarType (nominal)|0.998 ±0.018|0.989 ±0.028|0.987 ±0.020|0.979 ±0.030|0.974 ±0.032|0.984 ±0.027|0.966 ±0.029|0.965 ±0.032|0.952 ±0.035|0.914 ±0.053|0.854 ±0.096| |CarType (shifted)|0.654 ±0.210|0.826 ±0.127|0.977 ±0.051|1.003 ±0.044|0.995 ±0.042|1.012 ±0.035|1.014 ±0.028|1.017 ±0.028|1.001 ±0.039|0.905 ±0.145|0.825 ±0.172| |Crossing (nominal)|1.002 ±0.018|0.995 ±0.026|0.993 ±0.024|0.988 ±0.022|0.975 ±0.028|0.968 ±0.031|0.964 ±0.029|0.952 ±0.039|0.909 ±0.041|0.869 ±0.135|0.818 ±0.166| |Crossing (shifted)|0.675 ±0.120|0.990 ±0.051|1.012 ±0.043|1.031 ±0.028|1.029 ±0.032|1.019 ±0.018|1.025 ±0.028|1.012 ±0.027|0.977 ±0.039|0.915 ±0.140|0.859 ±0.147| |K|32|64|128|256|512|1024| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |CarType (nominal)|0.972±0.022|0.967±0.029|0.978±0.022|0.975±0.023|0.978±0.026|0.967±0.027| |CarType (shifted)|1.009±0.037|1.009±0.032|1.020±0.031|1.014±0.033|1.005±0.027|1.009±0.034| |Crossing (nominal)|0.971±0.030|0.974±0.032|0.987±0.023|0.974±0.030|0.983±0.034|0.980±0.024| |Crossing (shifted)|1.036±0.022|1.040±0.021|1.041±0.021|1.039±0.017|1.050±0.027|1.039±0.019| The results demonstrate three important messages: * **Our RSC-SAC is not sensitive to $\beta$.** As shown in the first table, the proposed RSC-SAC performs well in both nominal and shifted settings --- keeping good performance in the nominal setting and achieving robustness, for a large range of $\beta\%$ (10%-80%). It verifies that RSC-SAC is not sensitive to hyperparameter choices. * **Our RSC-SAC is not sensitive to $K$.** As shown in the second table, we evaluate the proposed RSC-SAC using different $K = [32,64,\cdots, 1024]$ and achieve similar results. 
It shows that RSC-SAC is not sensitive to the size $K$ of candidate samples for permutation. * **Performance-robustness tradeoff.** In the first table, when the ratio of perturbed data $\beta\%$ is very small (1%), RSC-SAC achieves almost the same results as vanilla SAC in nominal settings and shows no robustness in shifted settings. As $\beta\%$ increases (considering more robustness), the performance of RSC-SAC in the nominal setting gradually gets worse, while its performance in the shifted settings improves (more robust). However, when the ratio is too large (>80%), the performance of RSC-SAC in both settings degrades substantially, since the policy becomes so conservative that it fails in all environments. ### **(3) Add a visualization of the trajectories generated in the Lift environment by our perturbation algorithm.** The figure is in the uploaded **PDF**. In the nominal (training) environment, the green cube is always initialized on the left part of the table and the red cube is initialized on the right part. In the shifted (testing) environment, the green cube is always initialized on the right part of the table and the red cube is initialized on the left part. Figure (a) shows the trajectory of the state in the nominal environment. If we do not apply any perturbation of the state, the generated trajectories will still have a spurious correlation (Figure (b)). However, with our perturbation within the state, we generate trajectories that break the spurious correlation and blend the colors (Figure (c)). In the shifted (testing) environment, we will have the green cube on the right part and the red cube on the left part. With the counterfactual data (with respect to the unobserved confounder) generated by our algorithm (Figure (c)), we can prevent the RL model from overfitting to the spurious correlation between the color and the position of the cube. 
### **(4) Discussion about the limitation and potential improvement of our method for high-dimensional tasks.** > The current method has great potential to be applied to more complicated problems. In particular, the proposed method requires swapping the dimensions of states to break spurious correlation, which implicitly assumes that each dimension has semantic meaning. In more complicated problems such as high-dimensional image states, each dimension becomes a pixel without semantic meaning, and the dimensionality may explode. To extend our method to such cases, we can leverage existing state abstraction techniques [5,6] to project images to a low-dimensional latent space, where each dimension represents a semantic feature. Then, our method can be straightforwardly applied to the low-dimensional latent space. Pdf: /pdf/fe863d737f30c887428dca252b39d87d9d2bb279.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation
Accept (poster)
Summary: Self-attention mechanisms serve as pivotal components in the domains of natural language processing and computer vision. Previous kernel function based self-attention variants, which are predicated on Mercer kernels, tend to overlook the inherent asymmetry of the attention matrix in the vanilla self-attention. In response to this oversight, this study introduces a novel approach, termed Primal-Attention. This approach leverages asymmetric Kernel Singular Value Decomposition (KSVD) to facilitate a low-rank approximation of the attention matrix, thereby incorporating an asymmetric kernel attention matrix. The empirical findings substantiate that Primal-Attention functions as a low-rank attention mechanism, exhibiting superior performance while maintaining low time complexity. Strengths: 1. This paper introduces a novel perspective that interprets the self-attention mechanism as a KSVD optimization problem. Weaknesses: 1. The exposition of methodologies in this paper lacks sufficient clarity. For instance, the crucial concept of primal optimization in self-attention with KSVD, as proposed in Equation 6, is introduced abruptly without a detailed discussion of the connection between KSVD and self-attention. 2. Algorithm 1, outlined in Appendix B.1, details the entire process of Primal-Attention and employs random projection. However, the omission of a citation for the Johnson–Lindenstrauss lemma in the paper is inappropriate. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why should the primal optimization function, as depicted in Equation 6, be considered? On what basis is this function proposed? Furthermore, why should this function be maximized rather than minimized? 2. Why is the formula for Primal-Attention in each head represented by Equation 11? The process of derivation appears to be missing. 3. How do the authors handle the positive diagonal matrix $\mathbf{\Lambda}$? 
Given that $\mathbf{\Lambda}$ is a key component in Equations 6 and 11, can the authors provide a detailed explanation of the process? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and the appreciation of the novelty of our work. We address your concerns point by point below. ***R.1 Concept of primal optimization with KSVD in eq.(6) and connections with self-attention.*** The modeling and optimization of KSVD on self-attention are under the LSSVM setup [1]. We extend its linear version (SVD) to the nonlinear one (KSVD), starting from the primal optimization in eq.(6) and deriving the dual problem in eq.(7) for the primal-dual representation of self-attention in eq.(8). As presented in eq.(3), the attention outputs $o_i=\sum\nolimits_{j=1}^N v(x_j)K_{ij}$ are consistent with the dual representation of KSVD $e_i=\sum\nolimits_{j=1}^N h_{r_j}K_{ij}$, derived in eq.(8). We agree with the reviewer and will add more explanations at an earlier point, before eq.(6). More explanations can also be found in lines 115-122 on page 4 and Remarks 3.3-3.4 in the paper. ***R.2 Citation of the Johnson–Lindenstrauss lemma in Algorithm 1.*** In Algorithm 1 with data-dependent projection weights, the transformation matrix $f(X)$ is used for the projection directions, as explained in lines 128-132 on page 4. In experiments, we set $f(X):=X'$ with $X'$ a subset uniformly sampled from the rows of $X$. Indeed, the Johnson–Lindenstrauss Lemma [2] shows that the main patterns of a matrix can be retained under random linear projections. We appreciate the helpful reference and will cite it in the paper. All other projection parameters are optimized by SGD. ***R.3 Why and on what basis is eq.(6) proposed? Why maximization in eq.(6) but minimization in eq.(10)?*** Eq.(6) is based on the variational principle of SVD under the LSSVM framework [1], but extends the original linear SVD in [1] to the nonlinear KSVD with asymmetric kernels in the dual, induced by the two feature maps $\phi_q(x),\phi_k(x)$ in the primal.
The primal objective in eq.(6) maximizes the projection variances of $W_{e|X}^\top\phi_q(x),W_{r|X}^\top\phi_k(x)$ regarding queries and keys. This follows the idea of KSVD, because, in the dual, the corresponding right and left singular vectors learn the directions with the maximal projection variances w.r.t. the row and column data of the asymmetric kernel matrix $K:=[\left<\phi_q(x_i),\phi_k(x_j)\right>]$. The trace term in eq.(6) regularizes the primal variables $W_e,W_r$ and helps derive the Lagrangian and the dual optimization in eq.(7) via KKT conditions. This is the basis of eq.(6). Details can also be found in Remark 3.1, Theorem 3.2 and its proof in Appendix A.1. We derived that the primal optimization in eq.(6) can be solved through KKT conditions with stationary solutions in the dual optimization derived in eq.(7). As proved in Lemma 4.2, stationary solutions yield a zero-value objective $J$ in eq.(6); thus we do not need to solve the expensive SVD on the asymmetric kernel matrix $K$ in the dual, but instead optimize the objective $J$ in the primal, i.e., eq.(6), to a zero value. In implementing this optimization, the zero-value objective can be flexibly realized by $\min J^2$ with efficient SGD-based algorithms. Therefore, in training, we adopt $\min L+\eta \sum_{l}J_l^2$, i.e., optimizing the loss $L$ with regularization terms $J_l^2$ promoting the KSVD optimization. Relevant context can be found in Theorem 3.2 and Lemma 4.2 in the paper and their proofs in Appendix A. ***R.4 Why is the formula for Primal-Attention in each head represented by eq.(11)?*** In the primal optimization of KSVD for each head in eq.(6), the projection scores $e_i, r_j$ are given in the equality constraints. By substituting $e_i, r_j$ into the objective $J$ of eq.(6), we derive the unconstrained optimization with the objective as in eq.(11), as described in lines 124-127 on page 6.
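As a numerical illustration of the unconstrained objective in eq.(11), the following sketch computes $J$ from given feature maps (illustrative code under our own naming; `Phi_q`, `Phi_k`, `W_e`, `W_r`, and `lam_sqrt` are placeholders for $\phi_q(X)$, $\phi_k(X)$, $W_{e|X}$, $W_{r|X}$, and the diagonal of $\Lambda^{1/2}$ — not the authors' implementation):

```python
import numpy as np

def ksvd_objective(Phi_q, Phi_k, W_e, W_r, lam_sqrt):
    """Sketch of the unconstrained KSVD objective J (cf. eq.(6)/(11)).

    Phi_q, Phi_k : (N, p) feature maps of queries / keys (assumed given).
    W_e, W_r     : (p, s) primal projection weights.
    lam_sqrt     : (s,)   diagonal of Lambda^{1/2} (positive).
    In training, J is driven to its stationary zero value by minimizing
    J**2 together with the task loss.
    """
    E = (Phi_q @ W_e) * lam_sqrt   # Lambda^{1/2}-scaled projection scores of queries
    R = (Phi_k @ W_r) * lam_sqrt   # Lambda^{1/2}-scaled projection scores of keys
    variances = 0.5 * np.sum(E ** 2) + 0.5 * np.sum(R ** 2)
    regularizer = np.trace(W_e.T @ W_r)  # trace regularization term
    return variances - regularizer
```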
***R.5 How to handle the positive diagonal matrix $\Lambda$ crucial in eq.(6) and (11)? Detailed explanation of the process?*** + The diagonal matrix $\Lambda$ serves as the inverse of the positive singular values $\Sigma$ of the attention kernel matrix $K$, i.e., $\Sigma=\Lambda^{-1}$, as shown in lines 152-156 on page 5. Details can be found in the proof of Theorem 3.2 and the comments on Lemma 4.2 in the Appendix. + In modeling, as the attention outputs are formulated as projection scores along the directions corresponding to those singular values (vectors), $\Lambda:= \Sigma^{-1}$ is indeed a key component reflecting the importance of each projection direction, enabling low-rank explanations. + In experiments, we ensure the positivity of $\Lambda$ by defining it as the square of a diagonal matrix, $\Lambda:=C^2$, and setting it as a learnable parameter in $J$ of eq.(11) for each head: $J=\frac{1}{2}\sum\_{i=1}^N \|( W\_{e|X}\Lambda^{\frac{1}{2}})^\top\phi_q(x_i)\|_2^2+\frac{1}{2}\sum\_{j=1}^N \|( W\_{r|X}\Lambda^{\frac{1}{2}})^\top\phi_k(x_j)\|_2^2-\text{Tr}(W_e^\top W_r).$ Note that given $\phi_q, \phi_k$, the projection weights $W_{e|X},W_{r|X}\in\mathbb R^{p\times s}$ for the $s$ directions and $\Lambda\in \mathbb R^{s\times s}$ are optimized together by SGD to approach a zero-valued $J$. Here, $\Lambda$ imposes different importance on the $s$ projection directions. Due to the product of $W_{e|X},W_{r|X}$ and $\Lambda$, the optimization only determines the combined quantities $W_{e|X}\Lambda^{\frac{1}{2}},W_{r|X}\Lambda^{\frac{1}{2}}$, so the $\Lambda$ optimized by SGD is not necessarily the original inverse singular values. However, in empirical low-rank analysis, we can always compute the exact singular values of the self-attention kernel matrix $K$, as shown in Figure 1 in the paper, where our Primal-Attention captures more information with fewer components than the canonical self-attention. [1] Suykens, J.A.K.
“SVD revisited: A new variational principle, compatible feature maps and nonlinear extensions.” Applied and Computational Harmonic Analysis, 2016. [2] Johnson, W. B., and Lindenstrauss, J. “Extensions of Lipschitz mappings into a Hilbert space.” Contemp. Math., 1984. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have no further questions about this paper.
Summary: This paper proposes a new understanding of self-attention in transformers via asymmetric Kernel Singular Value Decomposition (KSVD). In particular, the authors formulate a primal-dual representation of self-attention for maximizing the projection variances in the attention outputs and then derive a new attention mechanism, namely the Primal-Attention, to avoid direct computation of the kernel matrix. Using KKT conditions, they prove the Primal-Attention can obtain a zero-value objective. Experimental results are provided to justify the advantage of the Primal-Attention. Strengths: 1. The idea of formulating self-attention as solving a KSVD problem is novel and potentially high-impact. 2. The derivation of self-attention from KSVD given in the paper is elegant and correct. 3. The paper is well-written, easy to follow, and enjoyable to read. Weaknesses: 1. More large-scale experiments, e.g., on full ImageNet or WikiText103 language modeling, should be conducted. 2. Empirical analysis, such as the efficiency analysis of the Primal-Attention vs. the baselines, should be provided to help provide better understanding of the proposed method. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Can the proposed KSVD framework be used to explain other components in a self-attention unit, such as the residual connection, the layer normalization, and the feedforward network? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have not clearly addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the high appreciation of our work and the insightful comments, which we address point by point below. **R.1 More large-scale experiments** As suggested, we test on ImageNet-1K and WikiText-103, both showing the promising potential of our method. Ours achieves the same accuracy as the baseline with less memory; recall that our model also shows an enhanced low-rank property as in Fig.1 in the paper. On WikiText-103, our method with default setups achieves comparable performance with the well-tuned Flowformer, a recent SoTA model. **Table 1:** Test Acc. (%) on ImageNet-1K. |Model|Top-1 |Memory(GB)| |:-:|:-:|:-:| |DeiT-Small/16|79.8|14.2| |Primal.$+$DeiT-Small/16(ours)|79.8|14.0| **Table 2:** Perplexity on WikiText-103. |Model|Perplexity($\downarrow$)| |:-:|:-:| |Trans.(2017)|33.0| |Re.(2020)|33.6| |Per.(2021)|37.5| |Cos.(2022)|34.1| |Flow. w/o competition(2022)|31.2| |Flow. w/o allocation(2022)|32.2| |Flow.(2022)|30.8| |Primal.$+$Trans.(ours)|31.0| **R.2 More empirical analyses** **R.2.1 Efficiency** LRA is a popular benchmark for evaluating efficiency due to its long data sequences. We test on LRA, where ours shows better efficiency in running time and memory, as in Table 3 in the paper. Below, we provide more efficiency evaluations. + *UEA Time Series:* Ours shows the best efficiency, especially in memory, with the advantage growing for longer sequences, e.g., EthanolConcentration and SelfRegulationSCP2. + *D4RL Reinforcement Learning:* Our "Primal.$+$DT" achieves comparable time and memory efficiency as Decision Transformer (DT), while Flowformer is significantly less efficient. Recall that in Table 4 in the paper, ours attains a much higher average reward of 77.5 than DT (72.2) and FlowFormer (73.5). **Table 3:** Running time and memory (GB) on UEA Time Series.
|UEA benchmark (sequence length)|Trans.|Flow.|Primal.+Trans.|Primal.| |:-:|:-:|:-:|:-:|:-:| |EthanolConcentration (1751)|4.3|2.4|3.3|2.3| |FaceDetection (62)|10.7|9.8|5.7|6.9| |HandWriting (152)|0.3|0.3|0.3|0.4| |HeartBeat (405)|0.7|0.7|0.7|0.4| |JapaneseVowels (26)|0.5|0.6|0.5|0.5| |PEMS-SF (144)|0.6|0.7|0.7|0.8| |SelfRegulationSCP1 (896)|1.8|1.4|1.6|1.4| |SelfRegulationSCP2 (1152)|1.8|1.3|1.6|1.3| |SpokenArabicDigits (83)|3.7|4.4|4.0|4.5| |UWaveGestureLibrary (315)|0.3|0.3|0.3|0.4| |**Avg. Time(s/Epoch)**|2.5|2.2|**1.9**|**1.9**| ||Trans.|Flow.|Primal.+Trans.|Primal.| |:-:|:-:|:-:|:-:|:-:| |**Memory**|10.9|2.8|6.5|**2.7**| **Table 4:** Running time(s/1K-steps) and memory(GB) on D4RL. |Time|Medium-Expert|Medium|Medium-Replay| |:-:|:-:|:-:|:-:| |DT(reward: 72.2)|20.8|20.8|20.8| |Flow.(reward: 73.5)|54.4|54.4|54.3| |Primal.+DT(**reward:77.5**)|23.5|23.4|23.3| |Memory|Medium-Expert|Medium|Medium-Replay| |:-:|:-:|:-:|:-:| |DT|0.3|0.3|0.3| |Flow.|1.5|1.5|1.5| |Primal.+DT|0.3|0.3|0.3| **R.2.2 Other empirical analyses** + *Spectrum of self-attention kernels:* We plot the cumulative explained variances of self-attention kernels on ImageNet-1K in Figure 1 in the paper. Our method shows an enhanced low-rank property, where more information can be captured within fewer components than the baseline. + *Ablation on $\eta$ and $s$:* Through the KSVD regularization coefficient $\eta$ in eq.(10) and the number of components $s$, we verify the effectiveness of our KSVD optimization in Tables 1, 2 in Sec.B.2 in the Supplementary Material. + *Ablation on the projection weights:* We also compare the data-dependent and data-independent projections in Tables 3 and 4 in Sec.B.2 in the Supplementary Material. + *Ablation on projection scores from left singular vectors:* We evaluate the results with (w/) and without (w/o) the projection scores ($r$-scores) involving the left singular vectors.
It shows that using both sets of projections (w/ $r$-scores) helps boost performance in learning asymmetric self-attention kernels. **Table 5:** Ablation on projection scores, i.e., $r$-scores, involving left singular vectors on LRA. |Primal.|ListOps|Text|Retrieval|Image|Pathfinder|Avg. Acc.| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |w/o $r$-scores|36.8|52.4|58.2|30.5|50.2|45.6| |w/ $r$-scores|37.3|61.2|77.8|43.0|68.3|**57.5**| |Primal.$+$Trans.|ListOps|Text|Retrieval|Image|Pathfinder|Avg. Acc.| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |w/o $r$-scores|37.1|65.1|79.2|42.8|72.8|59.4| |w/ $r$-scores|37.3|65.4|81.0|43.9|74.3|**60.4**| **R.3 Possible work on other components** Thanks for mentioning these interesting perspectives, which we would like to regard as possible future work. For instance, layer normalization might be related to the orthonormality of singular vectors; the feedforward network and residual connection could be connected with deep kernel machines [1,2]. Nevertheless, rigorous analyses and experiments are required before arriving at a conclusion. **R.4 Limitations** The limitations are discussed in the last paragraph of the Supplementary Material. + In efficiency evaluations, e.g., on UEA time series, our time efficiency gain is less distinctive on shorter-length sequence data than on longer ones. Further promoting the efficiency even on shorter-length sequence data is an interesting direction for future work. + Our Primal-Attention uses feature maps in the primal, avoiding kernel computations in the dual. However, it is not always easy to obtain or approximate feature maps of the kernel function, e.g., the Gaussian kernel [3]. We applied feature maps of cosine similarity kernels (see Remark 4.1 in the paper). Although we achieved good performances on many datasets, more variants of feature maps can benefit wider applications. Thanks for this suggestion. We will elaborate on this in the final version. [1] Allen-Zhu, Z., and Li, Y.
"Backward feature correction: How deep learning performs deep learning." arXiv, 2020. [2] Tonin, F., Tao, Q., Patrinos, P., and Suykens, J.A.K. "Deep Kernel Principal Component Analysis for Multi-level Feature Learning." arXiv, 2023. [3] Rahimi, A., and Recht, B. "Random features for large-scale kernel machines." NeurIPS, 2007. --- Rebuttal Comment 1.1: Title: More Questions on the Efficiency Advantage of Primal Attention Comment: Thank you for your response. Could you please give further clarifications on the following points? 1. The memory advantage of Primal + DeiT-Small over the baseline DeiT-Small is not very significant while both obtain similar accuracies. Given this result, it is hard to claim that Primal + DeiT-Small is more efficient than the baseline. Can you also provide the running time analysis of Primal + DeiT-Small vs. the baseline DeiT-Small for this experiment on ImageNet-1K? Similarly, how about the memory usage and running time for models trained on WikiText-103? 2. Why does Primal Attention gain more advantage in memory usage and running time on UEA Time Series than on D4RL and ImageNet-1K tasks? --- Reply to Comment 1.1.1: Title: Response to further questions on efficiency Comment: We thank the reviewer for the in-depth comments on further clarifications of the efficiency analysis. Below, we address the two raised points in detail. ### R.1 Efficiency analysis on large-scale datasets **Experiment setups and results** For ImageNet-1K and WikiText-103, we provide the training memory and time on a single V100 GPU in Table 1.1 and Table 1.2 below. In the experiment, we adopt the architecture of "Primal.$+$Backbone", where the last attention layer of the Transformer is replaced by our Primal-Attention. Since less information compression may be desired when learning on large-scale data and complicated tasks, especially in the shallow layers, we implement our KSVD-based Primal-Attention in the deep layer.
Relevant explanations can be found in the 1st paragraph of Sec.5 in the paper. + *ImageNet-1K:* While attaining the same accuracy and an enhanced low-rank property (Fig.1 in the paper), the efficiency gain of our Primal.$+$DeiT-Small/16 is limited, as we only replace the last layer of the 12-layer baseline with our Primal-Attention. Hence, it is reasonable that the efficiency improvement is less significant than on UEA and LRA, where we either implement both layers of the 2-layer baseline with Primal-Attention ("Primal.") or replace the 2nd layer ("Primal.$+$"). + *WikiText-103:* Our "Primal.$+$Trans." shows similar efficiency as the 6-layer baseline, but significantly reduces the perplexity by $2.0$. Compared to the fine-tuned SoTA Flowformer (see Table 2 in our rebuttal), our "Primal.$+$Trans." is slightly inferior in perplexity by a small margin of 0.2; however, Flowformer is significantly less efficient, requiring 28.8% more running time than ours. The current "Primal.$+$Trans." simply uses default setups; better perplexity can still be expected with further tuning. **Table 1.1:** Efficiency analysis on ImageNet-1K. |ImageNet-1K|Top-1|Memory(GB)|Time(s/1K-steps)| |:-:|:-:|:-:|:-:| |DeiT-Small/16|79.8|14.2|2425.5| |Primal.$+$DeiT-Small/16|79.8|14.0|2330.2| **Table 1.2:** Efficiency analysis on WikiText-103. |WikiText-103|Perplexity($\downarrow$)|Memory(GB)|Time(s/1K-steps)| |:-:|:-:|:-:|:-:| |Trans.(2017)|33.0|9.0|3108.4| |Flow.(2022)|30.8|10.5|3998.4| |Primal.$+$Trans.|31.0|8.9|3104.0| **Further remarks** In summary, the efficiency gain of Primal-Attention-implemented networks over the baselines is influenced by two main factors: *1)* the number of Primal-Attention layers used in the architecture (the more, the better); *2)* the sequence length of the training data (the longer, the more significant). + With deep architectures, the efficiency can be further improved by replacing more layers with our Primal-Attention.
Yet, in very deep Transformers, Primal-Attention is not necessarily always superior in performance when applied to all layers, as the learning in shallow layers may not enjoy the benefits of the low-rank property of KSVD as much as the higher layers do. It would be interesting to explore a more generic implementation setup for Primal-Attention in very deep Transformers, as briefly mentioned in the last paragraph on possible future work in the Supplementary Material. + The length of the data sequence, i.e., $N$, is also a key factor influencing the efficiency. By avoiding the computation of the $N\times N$ attention matrix, our Primal-Attention can gain better efficiency on longer-sequence datasets. Although ImageNet-1K is large-scale, current Transformers treat each image as a sequence of length 197 (with the $\tt cls$ token), which is actually not too long (even compared to some UEA datasets, as shown in Table 3 in our rebuttal). Hence, this is also a reason why our Primal.$+$DeiT-Small/16 does not improve the efficiency significantly. Similarly, on WikiText-103 the data sequence length is 512, which is also not really long; hence the efficiency of our "Primal.$+$Trans." is not always superior under the current setups. ### R.2 Efficiency gain of "Primal.$+$" in different tasks The efficiency gain of "Primal.$+$" over the baseline is more significant on UEA and LRA, as the backbone has only 2 layers, so replacing one layer makes a difference to the overall architecture. Moreover, UEA and LRA in general have longer training sequence lengths, which highlights Primal-Attention's efficiency advantage. In contrast, the backbones on D4RL, WikiText-103 and ImageNet-1K have more layers, where canonical self-attention layers are the majority of the structure in "Primal.$+$", as shown in Table 2.1 below. Besides, the efficiency gain is less significant also due to the shorter training sequence lengths on these datasets. Explanations can be found in response R.1 above.
**Table 2.1:** Architecture of Primal.$+$. |Primal.$+$|canonical_layer+[primal_layer]|num_head|head_dim| |:-:|:-:|:-:|:-:| |UEA|1+[1]|8|64| |LRA|1+[1]|2|32| |D4RL|2+[1]|4|64| |WikiText-103|5+[1]|8|64| |ImageNet-1K|11+[1]|6|64|
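The sequence-length argument above can be illustrated at the shape level (a hedged sketch with placeholder feature maps, not the authors' code): the dual view materializes an $N\times N$ kernel matrix, while the primal view only ever forms $N\times s$ projection scores, so memory grows linearly rather than quadratically in $N$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, s = 1024, 64, 8                  # sequence length, feature dim, #KSVD components
Phi_q = rng.standard_normal((N, p))    # placeholder feature map of queries
Phi_k = rng.standard_normal((N, p))    # placeholder feature map of keys
W_e = rng.standard_normal((p, s))      # primal projection weights (illustrative)
W_r = rng.standard_normal((p, s))

# Dual view: the kernel matrix is N x N -- memory grows quadratically in N.
K = Phi_q @ Phi_k.T

# Primal view: outputs are the concatenated projection scores, N x 2s --
# no N x N matrix is ever formed, so memory grows linearly in N.
scores = np.concatenate([Phi_q @ W_e, Phi_k @ W_r], axis=1)

assert K.shape == (N, N)
assert scores.shape == (N, 2 * s)
```

Doubling `N` quadruples the dual-view storage but only doubles the primal-view storage, which is why the gain is most visible on long-sequence benchmarks such as LRA and the longer UEA datasets.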
Summary: This paper proposes a Primal-Attention method to realize self-attention blocks in Transformers with a kernel matrix. Firstly, it explains the relationship between self-attention and the asymmetric kernel matrix. Secondly, it formulates self-attention in the form of kernel SVD and derives its primal and dual representations. Eventually, it proposes Primal-Attention, which uses not only the projection scores involving the right singular vectors of the asymmetric attention kernel K, but also the ones involving the left singular vectors of the asymmetric attention kernel K. Besides, it conducts experiments on several tasks. Strengths: 1. The motivation to formulate a neural network structure as a kernel one is quite interesting and meaningful. 2. The primal-dual representations are derivable. Weaknesses: 1. The explanation of why the projection scores involving the left singular vectors of the asymmetric attention kernel K are used is not adequate. It lacks enough analysis to prove their importance. 2. The experimental results show little improvement compared with baseline methods, and only the LRA benchmark has efficiency analyses. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Why are the projection scores involving the left singular vectors of the asymmetric attention kernel K so important? What does it mean physically by doing so? 2. Why do you conduct efficiency experiments only on the LRA benchmark? And why does Primal+ have less efficiency than Primal+Trans? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. Below, we address the two main concerns in detail. ***R.1 Incorporating projection scores involving left singular vectors of the asymmetric attention kernel matrix $K$.*** We provide detailed explanations and empirical evidence below. Relevant context can be found in lines 164-182 on page 5 of the paper. + In Section 2 of the paper, we present the self-attention weights $K_{ij}=\kappa(x_i, x_j)=\text{softmax}(\left< q( x_i),k(x_j) \right>/\sqrt{d_k})$, where $K$ can be regarded as a kernel in relation to queries and keys via asymmetric measures $\kappa(x_i, x_j)\neq\kappa(x_j,x_i)$. Given an asymmetric matrix $K$, information naturally exists in two directions, w.r.t. the row space and the column space [1]. This is in contrast to symmetric cases, e.g., Kernel PCA, which only explores the row data with a symmetric kernel [2]. Moreover, from the low-rank approximation perspective of KSVD on the self-attention kernel, utilizing both left and right singular vectors leads to the optimal approximation [3]. Hence, incorporating both sides of singular vectors provides more possibilities to exploit comprehensive information, so as to benefit the learning of asymmetric kernels for boosted performance. + Empirically, we conduct an ablation study with (w/) and without (w/o) the projection scores, i.e., $r$-scores, involving the left singular vectors. Table 2pAx-1 shows that using both projections (w/ $r$-scores) helps boost performance, verifying the effectiveness of learning with asymmetric self-attention kernels. **Table 2pAx-1:** Ablation on the projection scores, i.e., $r$-scores, involving left singular vectors on LRA with Top-1 test accuracy (%).
|Primal.|ListOps|Text|Retrieval|Image|Pathfinder|Avg.Acc.| |:-:|:-:|:--:|:-:|:-:|:-:|:-:| |w/o $r$-scores|36.8|52.4|58.2|30.5|50.2|45.6| |w/ $r$-scores|37.3|61.2|77.8|43.0|68.3|**57.5**| |Primal.$+$Trans.|ListOps|Text|Retrieval|Image|Pathfinder|Avg.Acc.| |:-:|:-:|:--:|:-:|:-:|:-:|:-:| |w/o $r$-scores|37.1|65.1|79.2|42.8|72.8|59.4| |w/ $r$-scores|37.3|65.4|81.0|43.9|74.3|**60.4**| ***R.2.1 Performance improvement*** In Tables 1, 2, 4 and 5 in the paper, our method beats all compared methods, including the very recent Transformers. In Table 1 on UEA Time Series, our method significantly improves over the baseline Transformer and is also better than most other Transformers by a clear margin. Although our improvement over the very latest Flowformer (Flow.) is less distinctive, it is reasonable to have comparable results on this simple benchmark with small-scale and short-sequence data. On LRA, D4RL and ImageNet-100 in Tables 2, 4 and 5, with more complex datasets, ours shows more substantial improvements. Notably, in reinforcement learning, our method has distinctively better performance than Flow. by a large margin (4.0%) in Table 4. ***R.2.2 Efficiency analyses*** + Long-range Arena (LRA) is a popular benchmark for evaluating efficiency in Transformers due to its long data sequences, a key element determining the kernel size and computational efficiency. Hence, we test on LRA in the paper. + In Table 3, "Primal." indicates that our Primal-Attention is applied to all attention layers, while in "Primal.$+$Trans." only the last layer in the Transformer is replaced by Primal-Attention, as explained in lines 219-229 on page 6. Thus, "Primal." is more efficient than "Primal.$+$Trans.". + We also present efficiency analyses on other datasets, with comparisons to the baseline Transformer and the most recent state-of-the-art Flowformer. **UEA Time Series**: Our methods show the best efficiency among all compared methods.
“Primal.$+$Trans.” improves over baseline Transformers (Trans.) by decreasing peak memory from 10.9GB to 6.5GB, while our “Primal.” even reduces it to 2.7GB and becomes more efficient with longer sequences, e.g., EthanolConcentration and SelfRegulationSCP2. **D4RL Reinforcement Learning**: Our "Primal.$+$DT" achieves comparable time and memory efficiency as the baseline Decision Transformer (DT), while Flowformer (Flow.) shows significantly lower efficiency. Recall that in Table 4 in paper, our method achieves a much better average reward of 77.5 than DT (72.2) and Flow. (73.5). **Table 2pAx-2:** Comparisons on running time (s/Epoch) and memory consumption (GB) on UEA Time Series. |UEA benchmark (sequence length)|Trans.|Flow.|Primal.+Trans.|Primal.| |:-:|:-:|:-:|:-:|:-:| |EthanolConcentration (1751)|4.3|2.4|3.3|2.3| |FaceDetection (62)|10.7|9.8|5.7|6.9| |HandWriting (152)|0.3|0.3|0.3|0.4| |HeartBeat (405)|0.7|0.7|0.7|0.4| |JapaneseVowels (26)|0.5|0.6|0.5|0.5| |PEMS-SF (144)|0.6|0.7|0.7|0.8| |SelfRegulationSCP1 (896)|1.8|1.4|1.6|1.4| |SelfRegulationSCP2 (1152)|1.8|1.3|1.6|1.3| |SpokenArabicDigits (83)|3.7|4.4|4.0|4.5| |UWaveGestureLibrary (315)|0.3|0.3|0.3|0.4| |**Avg. Time (s/Epoch)**|2.5|2.2|**1.9**|**1.9**| |UEA benchmark|Trans.|Flow.|Primal.+Trans.|Primal.| |:-:|:-:|:-:|:-:|:-:| |**Memory (GB)**|10.9|2.8|6.5|**2.7**| **Table 2pAx-3:** Comparisons on running time (s/1K-steps) and memory consumption (GB) on D4RL. |Time |Medium-Expert|Medium|Medium-Replay| |:-:|:-:|:-:|:-:| |DT (reward: 72.2)|20.8|20.8|20.8| |Flow. (reward: 73.5)|54.4|54.4|54.3| |Primal.+DT (**reward:77.5**)|23.5|23.4|23.3| |Memory|Medium-Expert|Medium|Medium-Replay| |:-:|:-:|:-:|:-:| |DT|0.3|0.3|0.3| |Flow.|1.5|1.5|1.5| |Primal.+DT|0.3|0.3|0.3| [1] Suykens, J.A.K. "SVD revisited: A new variational principle, compatible feature maps and nonlinear extensions." Applied and Computational Harmonic Analysis, 2016. [2] Schölkopf, B., Alexander S., and Klaus-Robert M. 
"Nonlinear component analysis as a kernel eigenvalue problem." Neural computation, 1998. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I still have some questions. In Table 2pAx-1, the performance of Primal with both projections is significantly worse than that of Primal+Trans without both projections (i.e., with only the projection scores involving the right singular vectors in the Transformer); does it mean that the projection scores involving the left singular vectors have a bad influence on the performance? On the other hand, from the results of Primal+Trans in Table 2pAx-1, utilizing both projections yields only slightly better performance on two tasks, while showing negligible improvements on the remaining three tasks. The necessity of utilizing the left singular vectors should be explained more intuitively. --- Reply to Comment 1.1.1: Title: Further response to the benefits of left singular vectors Comment: We thank the reviewer for the reply. We provide the following 3 aspects to address your concerns and eliminate possible confusion. ### 1. Projection scores involving left singular vectors are beneficial to the performance. For clarity, in Table R.1 we summarize the setups and architectures of Table 2pAx-1. We use the 2-layer backbone baseline commonly used on LRA. To make the ablation more comprehensive, we conduct this study **respectively** on the two main architectures in this work, i.e., Primal. and Primal.$+$Trans. (see the 1st paragraph of Sec.5 in the paper for details). **Table R.1:** Ablation setups on LRA. |Primal.|w/o $r$-scores|w/ $r$-scores| |:-:|:-:|:-:| |Layer 1|[right]|[right;left]| |Layer 2|[right]|[right;left]| |Avg. Acc.|45.6|**57.5**| |Primal.$+$Trans.|w/o $r$-scores|w/ $r$-scores| |:-:|:-:|:-:| |Layer 1|canonical attention|canonical attention| |Layer 2|[right]|[right;left]| |Avg. Acc.|59.4|**60.4**| The two subtables in Table 2pAx-1 are the ablation study of using the left singular vectors on **two different network architectures respectively**, so their results are not directly comparable under the setups of this ablation. Therefore, it is not the left singular vectors that bring a bad influence on the performance, and such evaluations should be considered separately for these two subtables. More specifically, 1. Using the projections of the left singular vectors, i.e., (w/ $r$-scores), is helpful to the performance, both when Primal-Attention is applied to all layers (Primal.) and when it is applied only to the deep layer (Primal.$+$Trans.). The role of the left singular vectors is more influential in Primal. for boosting the performance, where all canonical attention is replaced with the KSVD-based Primal-Attention. 2. As mentioned by the reviewer, Primal.(w/ $r$-scores) has inferior performance to Primal.$+$Trans.(w/o $r$-scores). This is because Primal.(w/ $r$-scores) applies low-rank KSVD to the first layer, and the learning in this shallow layer does not always benefit from the low-rank property of KSVD as much as the deep layer does on LRA. ### 2. Performance gain is substantial. From the results of Primal.$+$Trans. in Table 2pAx-1, the mentioned performance improvement is substantial on LRA. 1. *We obtain a 1% gain (59.4% $\to$ 60.4%) in average accuracy:* In Table 2 in the paper, the recent SoTA method YOSO-E has only a 0.4\% gain compared to the baseline. Hence, a 1% gain can be regarded as substantial on LRA. 2. *The LRA benchmark should be evaluated as a whole:* Since there are five different tasks in this benchmark, the average accuracy is the most conclusive measure, to which almost all SoTA methods also attach great importance. In Table 2 in the paper, YOSO-E even has inferior performance on the Pathfinder dataset compared to the baseline. However, this does not diminish its overall better performance on this benchmark. 3.
*In each individual task, using both projection scores consistently brings improvement:* As shown in Table R.2, the accuracy gain is significant on at least 3 tasks: Retrieval, Image and Pathfinder; besides, using both projection scores consistently improves performance on all tasks, which is substantial for the LRA benchmark.

**Table R.2:** Acc. (%) gain of Primal.$+$Trans. (w/ $r$-scores) over Primal.$+$Trans. (w/o $r$-scores).

|Primal.$+$Trans.|ListOps|Text|Retrieval|Image|Pathfinder|Avg. Acc.|
|:-:|:-:|:--:|:-:|:-:|:-:|:-:|
|Acc. $\uparrow$|0.2|0.3|1.8|1.1|1.5|1.0|

### 3. Necessity remarks 1. **Theoretically,** the derived KSVD can be regarded as a low-rank approximation to the self-attention kernel matrix, where utilizing both left and right singular vectors leads to the optimal approximation. 2. **Experimentally,** using both projection scores brings a substantial performance gain over using only one. 3. **Intuitively,** using both projection scores can be treated as considering both directions in a directed graph. Section 2 in the paper presents the asymmetric attention weight $K_{ij}=\kappa(\boldsymbol x_i, \boldsymbol x_j)=\text{softmax}(a(\boldsymbol x_i, \boldsymbol x_j))$ with $a(\boldsymbol x_i, \boldsymbol x_j)=\left<q(\boldsymbol x_i), k(\boldsymbol x_j)\right>/\sqrt{d_k}$. This shows that the attention kernel $K$ can be interpreted as message passing in a directed graph in relation to queries and keys, with asymmetric similarity measures $\kappa(\boldsymbol x_i, \boldsymbol x_j)\neq\kappa(\boldsymbol x_j, \boldsymbol x_i)$. Using only the right singular vectors means considering only one directionality, whereas additional information might reside in the other directionality, enhancing the learning on such an asymmetric kernel matrix. Useful references on graph theory can be found on pages 128-129 of [1]. In the asymmetric case, it is natural to consider directed information, i.e., both right and left singular vectors as in our case, to enhance performance.
[1] Estrada, E. "The structure of complex networks: theory and applications." Oxford University Press, 2012.
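To make the directed-graph intuition above concrete, here is a minimal NumPy sketch (our own illustration, not the paper's code, with hypothetical toy query/key maps) showing that a softmax attention kernel is asymmetric and that its left and right singular vectors genuinely differ, so a right-only projection discards one directionality:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4
X = rng.normal(size=(n, d))
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))  # toy query/key maps

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

# Attention kernel K_ij = softmax_j(<q(x_i), k(x_j)> / sqrt(d_k))
A = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
K = softmax(A, axis=1)

assert not np.allclose(K, K.T)   # the attention kernel is asymmetric

U, s, Vt = np.linalg.svd(K)
# For a symmetric PSD kernel the left and right singular vectors coincide;
# here they differ, so projecting onto the right singular vectors alone
# loses the information carried by the other direction of the graph.
overlap = np.abs(U.T @ Vt.T)     # would be ~identity for a symmetric PSD kernel
assert not np.allclose(overlap, np.eye(n), atol=1e-3)
```

Since row-wise softmax normalizes each row to sum to one, $K$ is a valid message-passing matrix of a directed graph while being non-symmetric, which is exactly the setting where a one-sided (right-only) spectral view is incomplete.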
null
null
Rebuttal 1: Rebuttal: Dear Program Chairs, Area Chairs, and Reviewers, First of all, we would like to thank you for your time and valuable comments, which help improve our work. In this work, we provide a new framework to interpret self-attention in Transformers via asymmetric Kernel Singular Value Decomposition (KSVD). Our work addresses the intrinsic asymmetry residing in self-attention and fills the gap left by most existing works on Transformers, which resort to classic techniques using symmetric Mercer kernels. With KSVD, a primal-dual representation of self-attention is formulated, and the corresponding primal and dual optimization problems are cast by maximizing the projection variances in the attention outputs. Based on the derived analytical results, a new attention mechanism with improved efficiency and performance is proposed, namely Primal-Attention, and its optimization is incorporated through a regularization term with enhanced low-rank properties. We are grateful that the reviewers unanimously regard our work as novel, interesting towards understanding self-attention, and of potentially high impact. The review comments mainly ask us to present more empirical evaluations and to elaborate on some technical details. In the rebuttal, we have addressed the comments from all reviewers point by point. To be more specific, *i)* we present more experimental results that all support the superiority of our method, including efficiency analyses, large-scale datasets, and an ablation on the incorporation of the projection score involving the left singular vectors (to **Reviewer 2pAx, Reviewer 8hph**); *ii)* we provide further explanations to better introduce the primal and dual optimization in Eq.(6)-(7) and the objective of Primal-Attention in Eq.(10)-(11) based on the property from the stationary conditions in Lemma 4.2 (to **Reviewer aphr**). We hope that our detailed responses address the comments well and can be assessed by the Chairs and Reviewers.
We sincerely look forward to further discussions with the reviewers. Best wishes, Anonymous author(s) of Paper3761
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Distributionally Robust Bayesian Optimization with $\varphi$-divergences
Accept (poster)
Summary: The authors extend the framework of distributionally robust Bayesian optimization to the case where the distribution distance notion amounts to $\phi$-divergences, which encompass the Kullback-Leibler divergence, total variation and the $\chi^2$-divergence. In particular, the paper aims at providing a computationally tractable algorithm for the maximization of a reward function that additionally depends on a context parameter drawn from a distribution with respect to which the procedure is supposed to be distributionally robust. They build on the paper [23] by Kirschner et al., where a similar problem was considered and where an efficient algorithm based on convex optimization was developed for the said problem if robustness is considered with respect to the maximum mean discrepancy distance. The main result of the paper is Theorem 1, which allows rewriting the maximization of the distributionally robust objective (DRO) as the maximization of a standard stochastic optimization objective corrected by a variance term in the cases of total variation and the $\chi^2$-divergence. This is the result of a characterization of the infimum in the DRO using convex conjugates of the $\phi$-function. Furthermore, a robust regret bound is derived and some numerical experiments are conducted for simple, standard reference functions such as the Rosenbrock and cosine functions. Strengths: The main strength of the paper lies, in my view, in the fact that the authors provide tractable characterizations of the distributionally robust objective (see page 4) when the distributional robustness is measured with respect to the KL, total variation and $\chi^2$-divergences. I consider the computations that lead to these characterizations to be technically sound. This extends prior work by Kirschner et al. that considered only distributional robustness with respect to the maximum mean discrepancy.
Weaknesses: A main weakness of the submission is that, despite the claim that its results go beyond the finiteness assumption on the set of contexts [23], this is not the case: in the definition of (4) and (5), the size of the set (which is called a space) of contexts is implicitly assumed to be finite due to the summation over c and the division by the cardinality |C| of the context set. This can, for example, also be seen in line 168 of page 4, where the finiteness assumption on the reference distribution is made explicit. A clarification or weakening of the claims is necessary from my point of view. For a thorough understanding of the bound on the robust regret presented in Theorem 2, a clear comparison to the bounds obtained in other papers such as [23] would be desirable. The experiments consider rather low-dimensional functions without a clear link to a machine learning related topic. The presentation of the paper lacks some clarity and suffers from challenges in its use of the English language. Furthermore, on several occasions, notation is used that is not explained, or only much later in the manuscript. We list a few issues below: - The second sentence is not grammatically correct - "computed, however cannot be replaced by another choice of D whose closed form is not readily accessible with samples such as the ϕ-divergence" - "samples such as the $\phi$-divergence" - $\varepsilon$ without dash in line 70, page 2 - superfluous bracket on p. 3, line 119 - "include the Wasserstein". Probably some sort of Wasserstein distance is referred to here, but it is so far unclear - "Total Variance", p. 4, line 153 - The convex conjugate would profit from an appropriate evaluation - "the same optimization problem". Which comparison optimization problem is this referencing?
- "that" missing on p. 6, line 253 - In (4) and (5), a lot of quantities are not defined, including $\mu_t$, $\beta_t$ (in the mentioned reference [47], that parameter is not defined explicitly either) and $\sigma_t$. - It is unclear what exactly is needed to run Algorithm 1. Do we need to know the values of $\varepsilon_t$, the reference distribution $p_t$, the kernel? - P. 3, line 106: $\mathcal{X} \in \mathbb{R}^d$ is probably not correct here Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Why is the maximum information gain defined differently than in Kirschner et al.? Please elaborate. - Please rework the presentation of the manuscript to better clarify the underlying assumptions and missing definitions and to address grammar issues. - The motivation of the comparisons of the different DRBOs in Figure 2 and Figure 3 is currently unclear: the different curves correspond to different settings (balls of distributions with respect to different distance notions). Why is it interesting to put the robust regret values into the same plots? - Why is it claimed that the algorithm works in the continuous context regime, when the quantities defining the acquisition functions assume finiteness of the context set? - Could you comment on the computational complexity with respect to the dimensionality or size of the input space? In the preliminaries, it is assumed that the input space lives in a d-dimensional ambient space. Does d play any role in the quantities estimated in the paper? - The paper would benefit from an elaboration on the significance of the regret bound. In this context, why is the robust regret a meaningful metric to measure the success of the procedure? Considering the definition of the DRBO problem, it might be beneficial to rather consider the (robust) objective value error of the last iteration T. - "In particular, we also present a similar analysis, showing that the robust regret decays sublinearly for the right choices of radii."
It is unclear to me why the regret bound from Theorem 2 corresponds to any type of decay. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors mention the question of the appropriate choice of the distributional robustness distance for specific settings as future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for pointing out the computational superiority of our method with the $\varphi$-divergence generalization. Indeed, while our work attempts to alleviate the finiteness-of-contexts assumption, we provide some clarification and discussion below regarding the discretization argument, which will be added to the updated version - we appreciate your concern in bringing this up. ___ Question: “Do we need to know the values of $\varepsilon_t$, the reference distribution $p_t$, the kernel?” Answer: Yes, these are required to compute the quantities. ___ Question: “Why is the maximum information gain defined differently than in Kirschner et al.? Elaborate.” Answer: The minor difference between the maximum information gain in our paper and in Kirschner et al. is the noise variance term $\sigma^{-2}$, for which we follow the original maximum information gain presented in Srinivas et al. ___ Question: “Why is it claimed that the algorithm works in the continuous context regime, when the quantities defining the acquisition functions assume finiteness of the context set?” Answer: The theorem we develop makes no finiteness assumption on the context space, and so in theory, when using the proposed objective, we will be robust to other distributions with continuous support. As you mention, we discretize to estimate the expectation and variance terms in practice, which seemingly appears to violate this. However, since the acquisition functions are all bounded (by a constant M, which is standard in BO), an elementary application of McDiarmid's inequality [1] allows us to bound the difference between the discretized variant and the continuous one by a factor of $O\left(\frac{M}{\sqrt{n_D}}\right)$ (with high probability), where $n_D$ is the number of discretizations we perform. Therefore, the discretization forms a very close approximation to the true quantities in Theorem 1 and can apply to continuous context regimes such as $C = [0,1]$.
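As an illustrative sketch of this discretization argument (our own toy example with a hypothetical bounded reward `f`, not the authors' code), one can check numerically that the discretized estimate of the expectation over the continuous context set $C = [0,1]$ converges at the advertised $O(1/\sqrt{n_D})$ rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bounded reward over a continuous context c in [0, 1] (hypothetical f).
def f(c):
    return np.cos(8.0 * c) + 0.5 * c

# Ground-truth mean via a very fine grid.
c_fine = np.linspace(0.0, 1.0, 200_001)
mean_true = f(c_fine).mean()

# Average estimation error of the discretized mean for growing n_D.
avg_err = []
for n_D in (10, 100, 1000):
    errs = [abs(f(rng.uniform(size=n_D)).mean() - mean_true)
            for _ in range(200)]
    avg_err.append(np.mean(errs))

# The error shrinks roughly like 1/sqrt(n_D), matching the
# O(M / sqrt(n_D)) McDiarmid-type bound discussed above.
assert avg_err[0] > avg_err[1] > avg_err[2]
```

The same Monte Carlo argument covers the variance term as well, since it is also a bounded function of the sampled contexts.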
We remark that a similar argument can be applied to Kirschner et al. regarding continuous contexts; however, it would require solving a linear program whose number of variables is $O(n_D)$, and thus a linear program solver will be very expensive. Therefore, the number of discretizations in their work cannot be too large. In contrast, we can perform a much more fine-grained discretization since we have a simple expression for our DRO objective, which can be computed in linear time $O(n_D)$. Thank you for this point; we believe this discussion will clarify and strengthen the contribution. [1] Doob, J. L. (1940). "Regularity properties of certain families of chance variables". Transactions of the American Mathematical Society. 47 (3): 455–486. doi:10.2307/1989964. JSTOR 1989964. ___ Question: Could you comment on the computational complexity with respect to the dimensionality or size of the input space? In the preliminaries, it is assumed that the input space lives in a $d$-dimensional ambient space. Does $d$ play any role in quantities estimated in the paper? Answer: The optimization complexity grows with the input dimension $d$. When $d$ increases, our regret bound becomes looser: in Theorem 2, the maximum information gain $\gamma_t$ (defined in Lines 267-268) increases with $d$, and thus the upper bound worsens. ___ Question: It is unclear to me why the regret bound from Theorem 2 corresponds to any type of decay. Answer: We refer to this as "decay" since if one chooses $\varepsilon_t = \Gamma_{\varphi}^{-1}\left(\frac{1}{\sqrt{t} + \sqrt{t+1}}\right)$, then the overall regret will be of order $\sqrt{T}$, which is considered state-of-the-art. ___ Question: “The paper would benefit from an elaboration on the significance of the regret bound. In this context, why is the robust regret a meaningful metric to measure the success of the procedure?
Considering the definition of the DRBO problem, it might be beneficial to rather consider the (robust) objective value error of the last iteration $T$.“ Answer: We have followed the literature in BO/DRO in using the cumulative robust regret as a metric. Thank you for the suggestion. However, we think the robust objective value at the last iteration $T$ may not be appropriate because the last point is not necessarily the optimal solution of the problem, and we do not pick the last point, i.e., $x_T$, as the final $\arg\max f(\cdot)$. ___ We feel we have addressed all of the concerns raised. If this is not the case, please let us know so that we can have the opportunity to discuss further. --- Rebuttal Comment 1.1: Comment: Thank you for your reply, which clarifies some of the questions I had satisfactorily. I maintain my rating, as after reading the authors' rebuttal, my main concern about the paper's claims, the question of whether the considered DRO-BO problem is equivalent to a finite-dimensional optimization problem even in the continuous context setting, has not been positively clarified. Furthermore, my question > The motivation of the comparisons of the different DRBOs in Figure 2 and Figure 3 is currently unclear: The different curves correspond to different settings (balls of distributions with respect to different distance notions). Why is it interesting to put the robust regret values into the same plots? remained unaddressed. --- Reply to Comment 1.1.1: Comment: Dear Reviewer NSv3, Thank you for letting us know that we have clarified some of the questions satisfactorily. We address the two remaining questions below. --- Regarding our claim of the "infinite dimensional optimization problem reducing to a finite dimensional variable", we mean this in terms of the number of optimization variables (while Q is infinite dimensional, only $\lambda$ and $b$ remain in the reduced problem).
However, in the event where $p_t$ is continuous and we discretize $p_t$, or when $p_t$ is finitely supported, the theorem still reduces the number of optimization variables from the size of the support (which can be large for many samples/discretizations) to just $\lambda$ and $b$. This is a significant computational improvement compared to [1]. Thank you for pointing this out, as this key advantage should be highlighted in the paper. --- Regarding the question: "The motivation of the comparisons of the different DRBOs in Figure 2 and Figure 3 is currently unclear: The different curves correspond to different settings (balls of distributions with respect to different distance notions). Why is it interesting to put the robust regret values into the same plots?" Apologies for overlooking this question in the first response. Thanks for bringing it up again and giving us the opportunity to clarify. In making the comparison in Fig. 2 and 3, we follow the literature in DRBO [1] in comparing performance using the robust regret over the iteration axis. Comparing different optimization algorithms or settings based on their performance over iterations helps in identifying which algorithms are more efficient for a particular problem; e.g., some algorithms might converge faster initially but slow down later, while others might converge more steadily throughout the optimization process. Note that this way of comparison is very popular in the Bayesian optimization community [2] - the primary setting considered in our paper. Having said that, we are also open to your suggestion of an alternative comparison. What would be a better choice for comparison across the DRBO methods? --- We thank you again for your time; we hope we have been able to convince you of our contributions. Otherwise, please let us know if there is any concern left. [1] Kirschner, Johannes, et al. "Distributionally robust Bayesian optimization."
International Conference on Artificial Intelligence and Statistics. PMLR, 2020. [2] Srinivas, Niranjan, et al. "Gaussian process optimization in the bandit setting: no regret and experimental design." Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010.
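As a small sanity-check sketch of the linear-time evaluation claimed above (our own illustration, assuming the textbook closed form for the $\chi^2$-ball rather than the paper's exact expression): over a discretized context set, the worst-case expectation $\inf_{\chi^2(Q\|P)\le\varepsilon}\mathbb{E}_Q[f]$ reduces to $\mathbb{E}_P[f] - \sqrt{\varepsilon\,\mathrm{Var}_P(f)}$ when the radius is small enough that the minimizer stays nonnegative, and this $O(n_D)$ formula matches a brute-force search over the ball:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
p = np.full(n, 1.0 / n)          # uniform reference distribution over contexts
f = rng.normal(size=n)           # objective values on the context grid
eps = 0.01                       # chi-squared ball radius (small, so q* >= 0)

mu = f @ p
var = p @ (f - mu) ** 2

# Closed form, computable in O(n_D): E_P[f] - sqrt(eps * Var_P(f))
closed_form = mu - np.sqrt(eps * var)

# Analytic minimizer q*_i = p_i * (1 + c * (mu - f_i)) with c = sqrt(eps / var)
c = np.sqrt(eps / var)
q_star = p * (1.0 + c * (mu - f))
assert np.all(q_star >= 0)                       # radius small enough
assert np.isclose(q_star.sum(), 1.0)
assert np.isclose(np.sum((q_star - p) ** 2 / p), eps)   # on the chi2 sphere
assert np.isclose(f @ q_star, closed_form)

# Random feasible distributions in the ball never beat the closed form.
for _ in range(2000):
    d = rng.normal(size=n)
    d -= d.mean()                                # keep sum(q) = 1
    d *= np.sqrt(eps / np.sum(d ** 2 / p))       # scale onto the chi2 sphere
    q = p + d
    if np.all(q >= 0):
        assert f @ q >= closed_form - 1e-9
```

This also illustrates why the closed form scales to fine discretizations: evaluating it is a single mean/variance pass over the $n_D$ context points, with no linear program involved.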
Summary: In this paper the authors extend the domain of distributionally robust Bayesian optimization (DRBO), as introduced by Kirschner et al., to the case of distributions with continuous support. They focus on the case of $\phi$-divergences and show that for these the DRBO problem can be reformulated in closed form using the convex conjugate of the function $\phi$. They then focus on the cases of the $\chi^2$-divergence and the total variation metric and show that the distributionally robust reformulations for these problems are equivalent to regularization problems. For the general case they also provide a bound on the robust regret. They conclude by illustrating their methods through numerical experiments. Strengths: The results provided by the authors are novel, extend the applicability of distributionally robust optimization significantly, and as such form a significant contribution. The paper is well written and clearly illustrates the key concepts. **Additional Comments** Most existing work on distributionally robust optimization focuses on problems with finite distributions. In this paper the authors have focused on the case of continuous support, which is significantly more challenging. For this setting they have provided a general robust reformulation along with specific reformulations for specific uncertainty sets. These new reformulations can be optimized by straightforward first-order methods without compromising the structure of the original problem. This is a key novel contribution of their work. Weaknesses: The numerical results, while interesting, did not seem to address the benefits of extending the results to continuous support as compared to discrete support. Since this is a key contribution of this paper, it would be better to see results illustrating the benefit of this.
**Additional Comments** As I said in my previous review, the numerical experiments presented by the authors seem to present a general comparison against robust methods. While this is okay, I feel it misses the key point of the work, which I take to be the reformulation of the DRO problem with continuous support. As such, it would have been better for the paper if the authors had numerically illustrated the benefit of such a reformulation over a continuous set. For example, is reformulating over this continuous support set better than simply reformulating over a finite set created using samples? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I would be interested in knowing if the authors have tried to identify the structure of the worst-case distribution. 2. It would also be interesting to know how many of the $\phi$-divergences can be converted into regularization problems. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and for noting our key contributions. Regarding our numerical experiments: since existing methods only focus on finite context sets, we showcase the benefits of our methods on such datasets. We then present an additional experiment with a continuous context where $p_t$ is selected to be uniform over $[0,1]$, where we do show that reformulating over continuous support sets is indeed better than simply creating a finite set using samples. --- Question: I would be interested in knowing if the authors have tried to identify the structure of the worst-case distribution. Answer: In the case of the KL-divergence, the worst-case distribution will resemble an exponential family whose base measure is the center distribution, which has been studied in the context of label shift, such as in [1]. For general $\varphi$-divergences, we suspect the distribution will be some generalization of this under differentiability assumptions on $\varphi$. --- Question: It would also be interesting to know how many of the $\phi$-divergences can be converted into regularization problems. Answer: This is a very good question, since under the assumption of $\varphi$ being twice differentiable, a well-known study on $\varphi$-divergences (see [2] and Remark 4 of [3]) has shown that $\varphi$ (via Taylor expansion) can be approximated with the chi-squared divergence. Since the chi-squared divergence yields a variance regularization term, an approximation argument under the assumption that $\varphi$ is twice differentiable will lead to regularization. Therefore, we conjecture that under smooth choices of $\varphi$, DRO-BO admits a regularization interpretation. Thank you for this point; we will add this discussion to the paper. --- [1] Zhang, J., Menon, A., Veit, A., Bhojanapalli, S., Kumar, S., & Sra, S. (2020). Coping with label shift via distributionally robust optimisation. arXiv preprint arXiv:2010.12230. [2] Nielsen, F., & Nock, R. (2013).
On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Processing Letters, 21(1), 10-13. [3] https://people.lids.mit.edu/yp/homepage/data/LN_fdiv_short.pdf --- Rebuttal Comment 1.1: Title: Response. Comment: I thank the authors for their response. I will maintain my review as it is.
Summary: This work studies distributionally robust Bayesian optimization (DRO-BO) problems with $\varphi$-divergences, which cover the $\chi^2$-divergence, total variation distance and KL divergence. The authors show that the minimax DRO-BO problem has an equivalent minimization problem, and propose an algorithm for solving special cases of $\varphi$-divergences. They complement their theoretical results with numerical experiments comparing against existing methods. Strengths: * This work presents an interesting perspective on DRO-BO by offering a reformulation into an equivalent minimization problem through convex conjugates, which opens up possibilities for efficient algorithms for solving DRO-BO. Weaknesses: I have no major concerns. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors have addressed the limitations according to the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time in reviewing our work and for your positive comments.
Summary: The paper proposes a new approach for Distributionally Robust Bayesian Optimization (DRBO). It addresses the problem of data shift under $\varphi$-divergences, which generalize previously studied settings and subsume other known divergence categories, including the $\chi^2$-divergence, total variation, and the Kullback-Leibler divergence. The paper proposes new expressions for acquisition functions under two types of divergences ($\chi^2$-divergence, total variation), a theoretical analysis showing the problem's reduction to a minimization problem, and a regret-bound analysis. Strengths: + The paper provides a theoretical analysis that reduces the computationally intractable problem of data shift in the context of BO to a tractable, simple optimization problem. + The paper is overall well-written and self-contained. The preliminaries section covers the technical definitions needed for the rest of the paper for readers unfamiliar with the problem details. The writing can be enhanced by adding a comparison and discussion of contextual BO. + Interesting and technically solid paper. The final acquisition function expressions are simple, yet the impact and generality of the approaches for the new types of divergences are important. I consider this an advantage since it becomes more amenable to execution. + The paper provides an adaptive expression for the acquisition function hyperparameter $\varepsilon$, leading to a hyperparameter-"free" approach when there is no prior knowledge from the user about the suitable value of $\varepsilon$. Weaknesses: + There is common motivation with contextual BO beyond the robustness literature. This is clear in the motivation but seems to be ignored later as relevant methods for comparison on both the qualitative and quantitative levels. Please address this part. I believe it is crucial and might be confusing to readers who are familiar with only one line of work and not the other.
+ The experimental setup is limited in terms of baselines and applications: + The baselines from regular BO included only one acquisition function. + The StableOpt baseline was omitted from the wind power experiment and the computational time comparison. + There are only two synthetic experiments and two real-world experiments. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: + It is not clear if, for the experiments in figures 3 and 4, $\varepsilon$ was varied based on its theoretical expression or was set to a fixed value in advance. + Is there a reason for omitting some of the baselines, including StableOpt, from the wind power experiment and the computational time comparison? + Why is the impact of the hyperparameter C studied only for the wind power experiment? Please refer to the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and your inclination to accept this paper. We will include additional motivation for contextual BO and hope that the answers below address your concerns regarding the experiments. --- Question: “It is not clear if, for the experiments in figures 3 and 4, $\varepsilon$ was varied based on its theoretical expression or was set to a fixed value in advance” Answer: Indeed, $\varepsilon$ is set following the theoretical expression (defined in Lines 273-274) in Figures 3 and 4. --- Question: “Is there a reason for omitting some of the baselines including StableOpt from the wind power experiment and the computational time comparison?” Answer: We omit the plot for StableOpt in the Wind Power experiment to avoid occlusion in the plots, as well as to focus on the behaviors of the different DRBO variants. --- We will include the computational time comparison for StableOpt, which relies on the worst outcome, which can be readily estimated from the GP surrogate model. Therefore, StableOpt is quite efficient, like standard BO, in terms of computational complexity. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. I think StableOpt should be included in the comparison for the wind power experiment, especially since it is the only real-world experiment. Based on the current figure, it does not seem that adding it will cause any occlusion. --- Reply to Comment 1.1.1: Title: Further response to Reviewer Yhe4 Comment: We thank the Reviewer for the suggestion of including StableOpt for the Wind Power experiment. We have run this experiment and have the figure with StableOpt ready. As per NeurIPS instructions, the authors are not allowed to add an external link to the figure during the response, and the option of uploading a PDF file expired after 9 August. In general, our DRBO approaches still perform better than StableOpt, which only looks at worst-case scenarios.
In the final version of the paper, we will update Figure 4 to include StableOpt for the Wind Power experiment.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their time and efforts in reviewing our work. The majority of reviewers are in favor of accepting the work, as they have recognized our main contribution, which is to provide “a theoretical analysis that reduces the computationally intractable problem of data shift in the context of BO to a tractable simple optimization problem” (Reviewer Yhe4), to develop an “efficient algorithm for solving DRO-BO” (Reviewer xPHX), specifically “on the case of continuous support which is significantly more challenging” (Reviewer DEDT), and which “extends prior work by Kirschner et al. that considered only distributional robustness with respect to the maximum mean discrepancy” (Reviewer NSv3). Thus the reviewers have particularly commended the novelty and clarity. However, it has come to our attention that there are certain points of clarification regarding the discretization of contexts and missing references that the reviewers have pointed out. We will add these points and hope we have addressed the concerns of the reviewers. If this is not the case, please let us know so that we can have the opportunity to discuss further.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper provides distributional robustness in the context of the Bayesian Optimization (BO) problem. Although there is existing work in this field, such work is rudimentary, and the authors develop a more general theory that works with generic $\varphi$-divergence-based ambiguity sets. The proposed algorithm works with familiar $\varphi$ functions, and furthermore, the authors derive closed-form expressions specifically for total variation, $\chi^2$, and KL-divergence functions. They then adopt existing Gaussian process-based BO solvers for the function evaluations in the reformulated expressions and derive a sublinear regret bound which differs from the existing BO regret bounds due to a new term that is the "price of distributional robustness". Strengths: I think the paper is clear and the reader is not getting distracted. The language is clear. There are no under/overpromises. The proofs are correct to my understanding. I believe the proof of Theorem 2 is sound. I also like the discussion after Theorem 1 on how the Variance term can be related to the existing DRO papers that relate distributional robustness to some variants of regularization. Weaknesses: The largest weakness in my view is the lack of a DRO literature review. The current literature review is based on BO, but regardless of what structure $f$ has (black-box or anything else really), there are thousands of papers out there, and especially $\varphi$-divergence is an overstudied topic. I find it hard to be convinced that Theorem 1 is useful or novel. Almost none of the papers reviewed in the "Distributionally Robust Convex Optimization" paper by Wiesemann, Kuhn, and Sim are cited. Moreover, half of the paper is on DRO, but I don't see a connection to BO. It looks like there are two separate fields, and the connection is light. Especially Algorithm 1, if I am not wrong, is already standard in the BO literature, and the contribution looks like the derivation of $\alpha(x)$.
I have further suggestions and questions below. I am giving a slight acceptance decision conditional on a more thorough literature review in the rebuttal period that would potentially convince me and the readers. Except for that, I would like to thank the authors for this work and the clean paper. **Update:** The score is updated from 5 to 6. Please see the discussions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the abstract and the following, could the authors please elaborate on `sublinear regret' -- sublinear in what? There are contexts and ambiguity (hyper)parameters. - Page 2, citation 48 is missing brackets. - Page 2, "as one would expect": why? There are many studies showing it does not necessarily give complicated problems. - Page 2, "several baselines": could the authors please be more specific? - Page 2: I don't think the first contribution listed is new. It is already established that the inner problem can be dualized in the majority of the literature. - Whenever "performs best empirically" is mentioned, could the authors please specify if this is out-of-sample? - In general, "why distributional robustness" instead of robust optimization (e.g., in the GP setting one can also think of $c$ additively perturbed) is not discussed. I see the relevance, but just a discussion could be useful. - Similarly, why $\phi$-divergences but not Wasserstein-balls? The former has some difficulties in real-life low-data settings due to the support constraints. - Section 3, "receives a context $c_t$": could the authors please clarify that this is independent of $x$? - Page 3, "full uncertainty information with any prediction": not clear - Footnote 2: please specify which examples constitute the "majority of considered examples". - Equation (3): firstly it is said that the interest of DRO is to "compute" the function, but isn't it to "optimize" it? There might be an $\inf$ missing here. I don't follow why the expectations are indexed by $q(c)$ but not $q$.
Afterward, it is said to be intractable, but again, there are many tractability results under different assumptions -- this is not a thorough summary. - Please also define that $\mathbb{E}$ is over the empirical distribution; otherwise, the problems on page 4 do not have a meaning. - $p_t$ is the reference distribution: maybe state that it is the empirical distribution and give some consistency properties of this. - Page 4, $B_{\varphi}^t(p)$: is $p$ supposed to be $p_t$? - Theorem 1, "measurable": according to which measure? - Theorem 1: can you please add a discussion of whether there is anything used about BO (I think not)? If not, then please let the user know that this formulation still needs computation of $f$ and for this purpose, you will revise the UCB-related algorithms from the literature. - Page 5, "existing BO advancements": please cite. - Examples 1-2: are these simply replacing the conjugate or are there further steps? Would be great to clarify. - Example 2: Could the authors please comment on the computational complexity of this problem? - Example 1, "very easily implemented": this reads a little subjective here. Please try to formalize. - Page 6, "convenient in the theoretical analysis": perhaps it should be 'convenience'? - I would recommend having more discussion on the second term of Theorem 2 and try and bring some insights. - Minor: In the appendix, there is a part where there is "to to" twice. - Why are the KL divergence results pushed to the appendix? I would have thought for most people that could be the most interesting divergence. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are clearly addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and generally positive inclination toward our paper regarding the clarity and novelty of our work. We apologize for the lack of references with respect to DRO and we will fix this. Indeed, Theorem 1 at a technical level may not be novel; however, its application to Bayesian Optimization certainly is in its current form. We will cite all work from "Distributionally Robust Convex Optimization" and references therein. --- Question: In the abstract and the following, could the authors please elaborate on `sublinear regret' -- sublinear in what? There are contexts and ambiguity (hyper)parameters. Answer: Here, we are referring to sublinear with respect to the number of iterations T, similar to Kirschner et al. Thank you for bringing this to our attention, as our bounds have several other parameters at play. --- Question: Page 2, "as one would expect": why? There are many studies showing it does not necessarily give complicated problems. Answer: Indeed; however, we only mean it is complicated if we are solving the DRO problem in BO exactly and directly by tackling the minimax problem, since one needs to minimize while also maximizing the objective - this could lead to an unstable solution. --- Question: In general, "why distributional robustness" instead of robust optimization (e.g., in the GP setting one can also think of $c$ additively perturbed) is not discussed. Similarly, why $\phi$-divergences but not Wasserstein-balls? The former has some difficulties in real-life low-data settings due to the support constraints. Answer: Yes, this is a fair point. We focus on robustness at the distributional level and phi-divergences largely due to their smooth properties, such that we can exploit Fenchel duality. --- Question: “is $p$ supposed to be $p_t$” Answer: Yes, this is a typo.
--- Question: “Theorem 1, "measurable": according to which measure?” Answer: This is with respect to the Borel sigma-algebra, so measurability becomes quite a mild condition. --- Question: “Theorem 1: can you please add a discussion of whether there is anything used about BO (I think not)? If not, then please let the user know that this formulation still needs computation of f and for this purpose, you will revise the UCB-related algorithms from the literature.” Answer: Indeed, there is nothing specific to BO here; however, the results specific to UCB and BO in combination with Theorem 1 appear in our regret analysis in Theorem 2. Thank you for pointing this out; we can have more of a discussion there stating this fact. --- Question: “Examples 1-2: are these simply replacing the conjugate or are there further steps? Would be great to clarify.” Answer: Yes, these are specifically replacing the conjugates and further simplifying the expressions. --- Question: “Example 2: Could the authors please comment on the computational complexity of this problem?” Answer: For the acquisition function, this is equivalent to Equation (5), where one needs to find the min and max values across all observed contexts, and is therefore linear in the number of observed contexts. --- Question: “Why are the KL divergence results pushed to the appendix? I would have thought for most people that could be the most interesting divergence.” Answer: While our result does apply to the KL divergence, we get readily available closed forms for chi-squared and total variation, as seen by the second term which forms a regularization term; the KL divergence does not reduce to such a form. --- Thank you for your additional comments on improving clarity and presentation. We agree that it would be interesting to have a discussion around the regularization term and the effect it has. We will also give more attention to the DRO literature, as you have suggested, and hope that we have addressed your concerns!
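The total-variation discussion above (a robust value obtained from the empirical mean by shifting probability mass toward unfavorable contexts, computable in linear time over the observed contexts) can be illustrated with a small sketch. This is our own illustrative reimplementation, not the paper's code: the function name and the greedy construction are assumptions, and the sketch solves the finite-support problem min_q E_q[f] subject to TV(q, p) <= eps exactly by moving at most eps probability mass from the highest-value atoms onto the lowest-value atom.

```python
import numpy as np

def worst_case_expectation_tv(values, p, eps):
    """Exact worst-case expectation over a total-variation ball of radius
    eps around the reference distribution p on a finite context set.
    Greedy: move up to eps probability mass from the highest-value atoms
    onto the lowest-value atom (moving mass m gives TV distance exactly m).
    """
    values = np.asarray(values, dtype=float)
    q = np.asarray(p, dtype=float).copy()
    order = np.argsort(values)          # ascending by value
    lo = order[0]                       # atom with the minimum value
    budget = eps
    for i in order[::-1]:               # from the largest value down
        if i == lo or budget <= 0:
            break
        move = min(q[i], budget)        # cannot remove more mass than is there
        q[i] -= move
        q[lo] += move
        budget -= move
    return float(q @ values)
```

For eps = 0 this returns the empirical mean; as eps grows, the result decreases toward the minimum value, which mirrors the range-like "regularization term" the authors mention. The cost is one sort plus a linear scan, consistent with the rebuttal's complexity remark.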
--- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you very much for replying to my questions! When you say further literature review will follow, or state "we can have more of a discussion there stating this fact", do you mean in the camera-ready version? Could the authors relate the results more to the DRO literature (I was hoping to see some discussion during the rebuttal period)? I still find it hard to see the novelty from the DRO side. It is OK even if the novelty is specific to BO (not a general result + a solution algorithm dedicated to BO). Best regards. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 1hSk, Thank you for your response. Regarding the relationship between our contributions and the DRO literature: while we use different proof techniques (such as Fenchel duality), the results are not new for DRO. You are right that the novelty is specific to BO. From the perspective of BO, however, we have several contributions: (1) We derive the acquisition function in a simple form. As Reviewer Yhe4 highlighted, this is an advantage since it becomes more amenable to execution. Our approach is general to new types of divergences that are important. (2) We provide an adaptive expression for the acquisition function hyperparameter $\varepsilon$, leading to a hyperparameter-"free" approach when there is no prior knowledge from the user about the suitable value of $\varepsilon$. (3) We derive regret bounds for DRO applied to BO with $\varphi$-divergences, which are novel. We will state in the camera-ready version, as you suggested, that our work has limited novelty from the DRO side but focuses on BO.
null
null
null
null
null
null
Efficiently incorporating quintuple interactions into geometric deep learning force fields
Accept (poster)
Summary: The paper introduces a new method for molecular modeling, QuinNet, which incorporates five-body interactions using only dihedral angles. The authors first introduce relevant concepts related to machine learning force fields and related work covering a variety of equivariant models. Next, the paper describes pertinent definitions of force fields, group equivariance, and methods for calculating empirical force fields. In the methods section, the authors describe their approach for integrating five-body terms into the architecture of QuinNet using only dihedral angles, incorporating model designs from prior work (PaiNN for 3-body interactions, ViSNet for 4-body interactions) and new definitions for different topologies of 5-body interactions. In addition to the architectural description, the authors provide relevant mathematical formulations and a complexity analysis. In their results, the authors showcase QuinNet's performance on a low- (MD17) and high-complexity (MD22) dataset in terms of energy and force modeling, including an ablation for different body terms in Figure 5. Strengths: The paper has the following strengths: * Originality: The proposed architecture incorporates terms for molecular modeling that are physically relevant, but have not been incorporated before. * Quality: The method and experimental design showcase relevant cases for applying GNN models for molecular modeling, with the idea behind the architecture being well-motivated. * Clarity: The paper presents a cohesive formulation of the method, both in figures and mathematics, and experiment descriptions with relevant takeaways. * Significance: The proposed architecture shows improved modeling performance, especially in forces, and provides a potential framework for incorporating physical interactions into GNNs. Weaknesses: The paper could be improved by the following: * Providing a clear and concise discussion of limitations.
[Quality, Significance] * Adding more context for the results in Figure 4. The MD simulations are only briefly described in Section 5.1, which is on a different page than the figure and easy to miss. [Clarity] * A description of the cases in which a greater set of many-body interactions is beneficial. This is briefly mentioned in the discussion between MD17 and MD22, but it would be good to put it in greater context in terms of the experimental results, and it could serve as part of the conclusion. [Clarity] Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Could you provide additional details on the limitations of QuinNet? E.g., is it limited to modeling mainly molecular systems? What sizes of molecules do you think QuinNet can be effective for and why? * Do you have data that supports your compute complexity analysis compared to other methods? If so, what kind of speedup do you generally find, if any? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors do not provide a discussion on limitations, which I raised as a weakness. I would like to see a discussion of limitations in future versions and/or during the discussion period. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her comments and will address each point in our response accordingly. ### Weakness 1: * As the experimental results indicate, five-body interactions do not have a substantial impact on small molecules. To demonstrate the significance of these interactions, it is essential to examine larger molecular systems. ### Weakness 2: * We apologize for not providing detailed information on the MD settings. Simulations were conducted for each model/molecule, covering a duration of 300 ps and starting from the initial frame configurations. With a 0.5 fs time step and a maintained temperature of 500 K, the simulations were controlled using a Nosé-Hoover thermostat. The distribution of interatomic distances, h(r), was computed as the ensemble average of distance statistics within the trajectories. We will include these relevant details in the Supplementary Materials section. ### Weakness 3: * As evidenced by the experimental results, higher-order many-body interactions become more significant for larger molecules. We appreciate the reviewer's suggestion and will incorporate additional discussion on this topic in the conclusion section. ### Question 1: * As previously mentioned in response to Weakness 1, our experimental results indicate that five-body interactions do not have a significant impact on small molecules. To demonstrate the importance of five-body interactions, larger molecular systems should be investigated. While our current experiments primarily focus on molecular systems, we plan to extend our methods to more complex systems, such as periodic structures. As we have stated, to highlight the significance of five-body interactions, larger systems must be examined, with the largest system in our experiments containing 370 atoms. The results show an improvement when incorporating five-body interactions.
Based on the findings from the MD22 and Chignolin datasets, molecules with nearly 100 atoms exhibit improvements when five-body interactions are included. ### Question 2: * The Time Complexity section in the official comment illustrates the complexity of explicitly calculating the related physical quantities, as well as the calculations performed in QuinNet, and further elaborates on the inference time and model parameters. We will expand our discussion on complexity, considering both theoretical and practical system perspectives. Additionally, we will include details regarding the inference time and memory usage in the manuscript. --- Rebuttal Comment 1.1: Title: Thank you for additional details Comment: Thank you for providing additional details in the rebuttal. I think that most of my questions and concerns have been addressed. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: We are glad to know that our response is satisfactory to you. We plan to include all the new experimental results in the manuscript as soon as we are permitted to polish the final version.
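The h(r) statistic described in the rebuttal above (the ensemble average of interatomic-distance counts over the MD trajectory) can be sketched as follows. This is a hypothetical helper, not the authors' code: the function name and the naive O(n^2) pairwise-distance computation are our own assumptions about how such a histogram is typically formed.

```python
import numpy as np

def interatomic_distance_histogram(traj, bins):
    """Ensemble-averaged histogram of all pairwise interatomic distances.

    traj: sequence of frames, each an (n_atoms, 3) array of positions.
    bins: 1-D monotonically increasing bin edges.
    Returns the mean per-frame count of atom pairs in each distance bin.
    """
    counts = np.zeros(len(bins) - 1)
    for frame in traj:
        pts = np.asarray(frame, dtype=float)
        # Full pairwise distance matrix, then keep each unordered pair once.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        iu = np.triu_indices(len(pts), k=1)
        counts += np.histogram(d[iu], bins=bins)[0]
    return counts / len(traj)
```

For a real trajectory one would iterate over frames sampled after equilibration (e.g., from the 300 ps runs mentioned above); normalizing by shell volume would turn this raw count into a radial distribution function, but the rebuttal only describes distance statistics, so the sketch stops at counts.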
Summary: In this work, the authors propose to incorporate features from five-body interactions into machine-learning force field models and develop QuinNet. To efficiently incorporate such high-order information, the authors are motivated by the topology of many-body interactions and design sophisticated components. Experiments on several benchmarks are conducted to demonstrate the performance of QuinNet. Strengths: 1. The target problem of this paper, the development of machine learning force field models, is of great significance. Weaknesses: 1. **The motivation for the designed components of many-body interaction is puzzling**. As introduced in Section 4, the development of four-body interactions (improper torsions) and five-body interactions is based on the topology. First, such analysis is purely qualitative. The authors did not provide further completeness proofs or quantitative evidence about these interaction schemes on real-world data. Second, the reasons for deriving Eq (4)-(9) are not well explained. It is suggested to clarify how these components are motivated according to the topology analysis. 2. **On the experimental evaluation**. There are several concerning aspects of the experiments: - The empirical performance is not consistently better than other baselines. Among the evaluated benchmarks, the proposed QuinNet cannot outperform the baselines significantly. For example, in MD17, the newly developed five-body interaction modules do not significantly improve performance. In rMD17, the best performance is diversely distributed among the compared models. Overall, the experimental evaluation does not well demonstrate the power of the newly developed modules. - The computation efficiency evaluation is missing. Although the authors provide a complexity analysis, it is better to further show the time/memory cost comparison between the proposed QuinNet and baselines.
Besides, the model parameters should also be provided for all compared models. - The scale of the chosen benchmarks is rather small. Both the dataset size and sample size (number of atoms) are limited. It is suggested to further evaluate the proposed QuinNet on large-scale benchmarks, e.g., the Open Catalyst Project [1]. - The ablation study. First, as shown in Figure 5, the inclusion of Five-body@I even induces further errors, which would make readers curious about whether such a phenomenon generally exists. Second, as introduced in ViSNet, the improper angle was also considered. The authors should add further discussions and empirical comparisons between it and the newly proposed four-body interaction (improper torsion). 3. **The writing does not meet the requirement of an acceptable paper in this conference**. First, Section 3.2 could be thoroughly extended (e.g., in the appendix) to introduce the background of force fields and highlight the importance of torsion potentials, improper torsions, and higher-order many-body interactions. Second, there is a lack of formal descriptions of QuinNet. Figure 3 can hardly be understood by readers who are not familiar with the related works in this area. [1] Chanussot L, Das A, Goyal S, et al. Open Catalyst 2020 (OC20) dataset and community challenges[J]. ACS Catalysis, 2021, 11(10): 6059-6072. - Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the Weakness section to address the concerns. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors did not discuss the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her comments and will address each point in our response accordingly. ### Weakness 1: * In our experiments, we employ Chignolin, a protein system [1,2], which offers quantitative evidence regarding the impact of five-body interactions, aligning with the conclusions in Ref [3]. As stated in Ref [4], "it is unclear how many descriptor elements are actually needed in order to make the descriptor complete and thus able to uniquely specify an atomic environment of the N neighbors." Moreover, from the perspective of "Many-Body Expansion" theory [5], representing the energy of an entire system at a hierarchical level requires including all many-body interactions for completeness. [1] van der Spoel, David, and M. Marvin Seibert. "Protein folding kinetics and thermodynamics from atomistic simulations." Physical Review Letters 96.23 (2006): 238102. [2] Satoh, Daisuke, et al. "Folding free-energy landscape of a 10-residue mini-protein, chignolin." FEBS Letters 580.14 (2006): 3422-3426. [3] Wang, Jiang, et al. "Multi-body effects in a coarse-grained protein force field." The Journal of Chemical Physics 154.16 (2021). [4] Bartók A P, Kondor R, Csányi G. On representing chemical environments[J]. Physical Review B, 2013, 87(18): 184115. [5] Collins M A, Bettens R P A. Energy-based molecular fragmentation methods[J]. Chemical Reviews, 2015, 115(12): 5607-5642. * Thanks for your suggestion. As illustrated in Figure 2 of the manuscript, three-body interactions can be represented as angles (Fig. 2a). To efficiently calculate the physical quantity, we can adopt the method used in PaiNN. Similarly, the approach employed in ViSNet can be utilized to compute dihedral angles by calculating the normal vectors of planes $ijk_1$ and $ijk_2$; the inner product of these two normal vectors will yield the torsion angle (Fig. 2b).
For improper angles, we can first compute the normal vector of the plane $ij_1j_2$, then take the inner product of $ij_3$ and the normal vector (Fig. 2c). Five-body interactions follow a similar process, involving the calculation of normal vectors for planes $ij_1j_2$ and $ij_3j_4$ in Fig. 2d; planes $ij_1k_1$ and $ij_2k_2$ in Fig. 2e; and planes $ik_1k_2$ and $ijk_3$ in Fig. 2f. The inner product of these two normal vectors will then provide the dihedral angle. We will include more details in the manuscript to better explain our methodology. ### Weakness 2: * MD17 and revised MD17 are small molecular datasets in which the influence of five-body interactions is relatively minor. Nevertheless, five-body interactions have been demonstrated to be crucial in various scenarios, such as replicating specific phenomena in protein systems. Consequently, QuinNet is not only compatible with other state-of-the-art models on the MD17 and rMD17 benchmarks without any sacrifices but also outperforms numerous leading models on larger molecular systems, such as the MD22 and Chignolin datasets. * The Time Complexity section in the official comment showcases the complexity of explicitly calculating the relevant physical quantities, as well as the calculations performed in QuinNet. Additionally, the inference time and model parameters are presented in the Time Complexity section of the official comment. * As highlighted in the introduction section, to demonstrate the significance of five-body interactions, it is essential to test large-scale systems. The mean system size in OC20 is 77.75 (as reported in ComENet), whereas the size of supramolecules in MD22 ranges from 42 to 370, and the size of the Chignolin protein is 166. Experimental results indicate that five-body interactions have a more substantial impact on larger molecules than on smaller ones, such as those in the MD17 and rMD17 datasets.
Furthermore, in response to other reviewers' suggestions, we have included the performance results for the QM9 dataset in Table 3 of the official comment section to demonstrate our model's effectiveness with varying dataset sizes. Due to time constraints, the experiments are not yet fully converged. Therefore, the table presents the current results, and we will update it once the experiments are completed. We will also incorporate these results into the manuscript. * First, we arrange the 5-body interactions as depicted in Figure 2(d)-(e) since 5-body interactions@I encompass a portion of 4-body (improper) interactions and 5-body interactions@III include a part of 6-body interactions. According to our ablation study, when incorporating the 5-body@I component, the energy errors increase, whereas the force errors remain comparable to the results of 4-body (improper) interactions. Although 5-body interactions@I can partially represent the improper term with dihedral angles, using dihedral angles for this purpose may not be suitable, as the descriptions for the improper term in empirical force fields shown in Figure 1 typically involve height or angles. These inappropriate pieces of information may potentially damage performance to some extent. Second, it should be noted that ViSNet incorporates the improper angle in its latest version on arXiv, which was released after the NeurIPS submission deadline and has not yet undergone peer review. Consequently, the related discussion was not included. ### Weakness 3: * We will thoroughly revise the paper to improve its clarity and readability. The revisions will include: * Providing a more detailed introduction to the background, which will help readers who are not familiar with the field to gain a better understanding of the concepts. * Offering a more comprehensive context description for Figure 3, ensuring that the information presented is clear and easily interpretable.
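The normal-vector construction described in the rebuttal above (cross products of bond vectors define the two plane normals, whose relative orientation gives the torsion angle) is the standard signed dihedral computation. The sketch below is a generic textbook formula, not QuinNet's implementation; the function name is our own.

```python
import numpy as np

def dihedral(p1, p2, p3, p4):
    """Signed dihedral (torsion) angle in radians for the chain p1-p2-p3-p4:
    the angle between the planes (p1, p2, p3) and (p2, p3, p4), obtained
    from the cross products of the bond vectors."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1 = np.cross(b1, b2)               # normal of plane (p1, p2, p3)
    n2 = np.cross(b2, b3)               # normal of plane (p2, p3, p4)
    # atan2 form is numerically stabler than acos of a normalized dot product
    # and preserves the sign of the rotation about the central bond b2.
    return float(np.arctan2(np.cross(n1, n2) @ (b2 / np.linalg.norm(b2)),
                            n1 @ n2))
```

A planar cis arrangement gives 0, a planar trans arrangement gives ±pi, and a perpendicular final bond gives ±pi/2, matching the geometric picture in Figure 2 of the paper.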
Summary: This paper aims to incorporate 5-body interactions into geometric deep learning models. They first analyze the topology of 5-body interactions and identify three 5-body angles. Then they propose an efficient way to incorporate this 5-body information into models. The complexity of the proposed QuinNet is still O(|N|), the same as many previous 2-body methods like PaiNN. The results are comparable to previous SOTA methods. Strengths: This paper is well-written and easy to follow. The experimental results show that the proposed method can perform well on most tasks. The ablation study in Section 5.4 and Figure 5 shows that the proposed 5-body information indeed helps the model. Weaknesses: See details in the Questions part. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. About the motivation: this paper aims to incorporate 5-body interactions into geometric deep learning models. However, based on my understanding, using up to 4-body (torsion) interactions is already complete [1][2] in terms of capturing the geometric structures. If this is correct, then why do we need these 5-body angles? In addition, if we can incorporate 5-body interactions, do we also need to incorporate 6-body interactions? 2. About the complexity: in Section 4.3, the authors claim that the complexity is O(|N|), as efficient as many 2-body methods like SchNet and PaiNN. But I think this complexity is not well explained. Using pseudocode/an algorithm may be better for analyzing the complexity. In addition to the analysis, I suggest the authors use some results to empirically verify the claimed efficiency compared to other baseline methods, e.g., the inference time, memory used, etc. 3. About the tasks: this paper focuses on MLFFs; how about other molecular property prediction tasks, such as QM9 and OC20? I am wondering if this method is specially designed for MLFFs, or can be used on all 3D molecule tasks. In other words, why do the authors emphasize MLFFs?
Is there any significant difference between MLFFs and other molecular property prediction tasks? 4. Other related papers: many-body [3], MLFFs [4]. 5. The j, k in Figure 2 are confusing to me. For example, in (f), why not i, j1, j2, j3, and k1? [1] ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs. [2] GemNet: Universal Directional Graph Neural Networks for Molecules. [3] On the Expressive Power of Geometric Graph Neural Networks. [4] Forces are not Enough: Benchmark and Critical Evaluation for Machine Learning Force Fields with Molecular Simulations. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her comments and will address each point in our response accordingly. ### Questions 1: * **4-body interactions are not complete for modeling molecular interactions.** As stated in Ref [1], "it is unclear how many descriptor elements are actually needed in order to make the descriptor complete and thus able to uniquely specify an atomic environment of the N neighbors." Moreover, from the perspective of "Many-Body Expansion" theory [2], representing the energy of an entire system at a hierarchical level requires including all many-body interactions for completeness. ComENet demonstrates its geometric completeness for a strongly connected 3D graph, proving that torsion angles are indeed adequate for capturing geometric structures. However, determining the identical nature of two 3D graphs does not guarantee the ability to model molecular interactions accurately. Furthermore, while incorporating 5-body interactions may be over-complete for capturing geometric structures, our results indicate their effectiveness with only a minor increase in complexity. The inclusion of 6-body interactions, if deemed necessary, could lead to increased computational costs. Nevertheless, due to the computational complexity and the challenge of identifying an appropriate physical quantity to describe many-body interactions, this area warrants further investigation. We hope our work can offer valuable insights for future studies. [1] Bartók A P, Kondor R, Csányi G. On representing chemical environments[J]. Physical Review B, 2013, 87(18): 184115. [2] Collins M A, Bettens R P A. Energy-based molecular fragmentation methods[J]. Chemical Reviews, 2015, 115(12): 5607-5642. ### Questions 2: * The Time Complexity section, presented in the official comment part, illustrates the complexity of calculating relevant physical quantities explicitly and within the QuinNet framework. 
Additionally, the inference time and model parameters are detailed in the Time Complexity section of the official comment part. ### Questions 3: * Machine learning force fields are indeed an important scientific problem. Our network has the potential to naturally extend to molecular property prediction tasks. In response to your suggestion, we have conducted experiments on the QM9 dataset, and the preliminary results are displayed in Table 3 of the official comment. Due to time constraints, the experiments have not yet converged. Therefore, the table presents the current results, and we will update it once the experiments are completed. We will also incorporate these results into the manuscript. In comparison to molecular property prediction tasks, which typically predict a single value, machine learning force fields demand a more significant trade-off between accuracy and efficiency. This is the rationale behind designing our network with a complexity of $O(|N|)$. ### Questions 4: * We will add them to the references. ### Questions 5: * $j$ and $k$ represent the neighbor atoms of $i$ and $j$, respectively. We will revise the figures and symbols to enhance the clarity of the manuscript.
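The counting behind the rebuttal's complexity argument can be made concrete with a short sketch. Enumerating a k-body term explicitly means choosing a central atom (N choices) and extending the tuple through roughly k−1 neighbor hops (~N_b choices each), i.e. O(N·N_b^(k−1)) terms, whereas edge-wise message passing touches only O(N·N_b) pairs per layer. This is a generic illustration of the scaling, not the authors' implementation; the atom and neighbor counts below are hypothetical.

```python
def explicit_kbody_terms(n_atoms: int, n_neighbors: int, k: int) -> int:
    """Rough count of explicit k-body interaction terms: O(N * N_b^(k-1))."""
    return n_atoms * n_neighbors ** (k - 1)

def message_passing_cost(n_atoms: int, n_neighbors: int) -> int:
    """Per-layer cost of edge-wise message passing: O(N * N_b)."""
    return n_atoms * n_neighbors

if __name__ == "__main__":
    N, Nb = 21, 12  # hypothetical atom and neighbor counts
    for k in range(2, 6):
        print(f"{k}-body: ~{explicit_kbody_terms(N, Nb, k)} explicit terms")
    print(f"message passing, per layer: ~{message_passing_cost(N, Nb)} edge updates")
```

The exponential-in-k gap between the two counts is the reason an O(|N|) factorized formulation matters once five-body terms are included.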
Summary: This paper introduces a machine learning force field that is a neural network with explicit interactions for up to 5-body terms. The authors evaluate the model on a couple of public datasets and demonstrate the competence or superiority of this new model compared to the state of the art in this field. Strengths: The paper provides an important addition to a series of ever-improving machine learning potentials. The contribution is clear and simple to understand at the high level, though the details are often unclear. The benchmarks were compared against a set of reasonably strong published methods in this area. In my opinion, if this work was presented in an unambiguously clear fashion and accompanied by code, it could be a strong contribution to this conference. [The paper improved significantly following the first round of feedback from reviewers, so I'm raising my rating to a 7.] Weaknesses: The complexity analysis is very limited. How many total interactions does a typical molecule have as a function of its atom count, and how does the practical experimental complexity scale for the evaluation of these molecules? One of the main reasons that 5-body terms were not used in traditional MD simulations was the poor scaling of the number of interactions one would need to calculate. The MD simulation mentioned in section 5.1 and Fig 4 is not described anywhere. The following sentences suggest that there would be some explanations in the supplement, but I couldn't find them: "Additionally, we perform MD simulations using trained QuinNets as force fields and plot the distribution of interatomic distances h(r) for these 7 molecules in Fig. 4. Further details regarding additional settings can be found in the Supplementary Materials." These sentences in the supplement, page 2, are confusing or wrong: "Similarly, five-body@III interaction (Fig. S1 (c)) is a special case of six-body interaction when nodes i and k4 in Fig. S1 (d) superpose each other. 
Thus, the QuinNet model captures all five-body interactions and a portion of six-body interactions, making it a versatile and comprehensive tool for modeling complex molecular systems." There is no six-body interaction if two of the bodies are the same, and there is no physically acceptable case where two different atoms could superpose each other. The code is not provided, so it is not possible for me to assess the reproducibility of this method. The diagram in Figure 3 seems reasonable at the very high level, but it lacks the definitions of most of the terms annotated in the figure, thus rendering it confusing. (What is $Y_l$? is it the set of all spherical harmonics $Y_{lm}$ for a given angular momentum $l$? What is $n_j$? What is $s_j$? $W$?...) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors add the presentation of the QM9 quantities estimated in the recent publication for Allegro? (https://www.nature.com/articles/s41467-023-36329-y Table 3) How long and how stable were the actual MD simulations? What were the exact codes/protocols used? What is the practical performance of the model during evaluation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impacts from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her comments and will address each point in our response accordingly. ### Weakness 1: * We address your concern through both theoretical and practical analyses. In the official comment part, we present the time complexity analysis and comparisons of inference time and model parameters. In empirical force fields, interactions are computed explicitly using physical quantities. For instance, two-body interactions are represented as bonds, with the order of the number of two-body interactions being $O(NN_b)$, where $N$ and $N_b$ denote the number of atoms and the number of neighbors, respectively. Three-body interactions are represented as angles, with an order of $O(NN_b^2)$. Four-body and five-body interactions are depicted as dihedral angles, with complexity orders of $O(NN_b^3)$ and $O(NN_b^4)$, respectively. Taking aspirin as an example, and setting the radius cutoff at 5Å, the number of two-body interactions, three-body interactions, four-body interactions (torsion), four-body interactions (improper torsion), five-body interactions@I, five-body interactions@II, and five-body interactions@III are 306, 2202, 38553, 10369, 35635, 628197, and 517155, respectively. **The number of five-body interactions is significantly greater than that of other many-body interactions.** This is why 5-body terms are typically not used in traditional MD simulations. However, in some cases, five-body interactions play a crucial role in various fields such as coarse-grained protein force fields, organic molecules, crystal vibrations, and electrostatic interaction potentials. The method introduced in QuinNet may provide an efficient solution in these cases. ### Weakness 2: * We apologize for the lack of detail in the Supplementary Materials. Simulations were carried out for each model and molecule, starting from the first frame configurations and spanning 300 ps. 
A 0.5 fs time step was employed, and the temperature was maintained at 500 K using a Nosé-Hoover thermostat. The distribution of interatomic distances, h(r), was computed as the ensemble average of distance statistics within the trajectories. The relevant descriptions will definitely be added to the Supplementary Materials. ### Weakness 3: * As shown in Equation 9, there are six indices, i.e., $i$, $j$, $j_1$, $j_2$, $k_1$, and $k_2$, where $j, j_1, j_2\in \mathcal{N}_i$ and $k_1, k_2 \in \mathcal{N}_j$. Therefore, Equation 9 describes six-body interactions. However, since $i \in \mathcal{N}_j$, $k_1$ or $k_2$ might be the same index as $i$, causing the equation to describe 5-body interactions, as there will be only five indices in the equation. This is why we state that the five-body@III interaction is a special case of six-body interaction. We will modify the sentence as "Similarly, five-body@III interaction (Fig. S1 (c)) is a special case of six-body interaction (Fig. S1 (d)) when the indices i and k4 are the same in Equation 9. Thus, the QuinNet model captures all five-body interactions and a portion of six-body interactions, making it a versatile and comprehensive tool for modeling complex molecular systems." ### Weakness 4: * The code will be released upon the paper's acceptance. To avoid confusion, we will add descriptions for the notations in the caption of Figure 3. $Y_l$ is indeed the set of $Y_{lm}$. $n_j$ represents the normal vector associated with node $j$, and $s_j$ denotes the scalar embedding of node $j$. $W$ refers to the weights in a linear layer, while $b$ indicates the bias. ### Questions 1: * We have conducted benchmarks on the QM9 dataset, and the preliminary results (Table 3) can be found in the official comment part. Due to time constraints, the experiments have not yet fully converged. Therefore, the table presents our current results, and we will update it once the experiments are completed. 
Additionally, we will include these results in the manuscript. ### Questions 2: * The MD simulations for molecules in the MD17 dataset were performed over 300 ps, and all seven trajectories exhibited stability. As shown in Refs. [1] and [2], simulations using most of the models remain stable for the MD17 dataset. Moreover, the code for the simulation was implemented using the ASE Python package and adapted from Ref. [1]. [1] Fu X, Wu Z, Wang W, et al. Forces are not Enough: Benchmark and Critical Evaluation for Machine Learning Force Fields with Molecular Simulations[J]. Transactions on Machine Learning Research, 2023. [2] Wang Z, Wu H, Sun L, et al. Improving machine learning force fields for molecular dynamics simulations with fine-grained force metrics[J]. The Journal of Chemical Physics, 2023, 159(3). ### Questions 3: * In terms of practical performance, we conducted MD simulations for seven molecules in the MD17 dataset over 300 ps, and all trajectories exhibited full stability. Regarding accuracy, the distributions of interatomic distances h(r) calculated using the trained QuinNet model closely matched the results obtained from DFT calculations. As for efficiency, we have included additional analysis (Time complexity section) in the official comment part to demonstrate the performance of the QuinNet model. --- Rebuttal Comment 1.1: Title: Thanks for the response. Comment: I have read the responses and appreciate the additional effort by the authors. The response to weakness 3 is confusing: would one consider that a 2 body term captures a part of the three body terms or a 3-body term a part of the 4-body term? (these are all clearly orthogonal considerations and can vary independently; moreover there is never a case when two particles are on top of each other.) I would urge the authors to put some additional effort into clarifying this sentence or dropping this point, however, I don’t feel strongly about it. 
--- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thank you for your comments. In Weakness 3, our primary emphasis was on analysis from the formula's perspective, rather than from the actual system. As a result, there would be no unphysical situations involving overlapping atoms. Nevertheless, we acknowledge the reviewer's concerns, and recognize that the description in this section may create confusion for readers. Therefore, we opt to remove this point per your suggestion. We sincerely appreciate your valuable feedback, which significantly enhances the quality of our article.
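The h(r) statistic discussed in the rebuttal above — the ensemble average of interatomic-distance histograms over an MD trajectory — can be sketched in a few lines of NumPy. This is a generic illustration of the quantity, not the authors' analysis code; the array shape and units are assumptions.

```python
import numpy as np

def h_of_r(trajectory: np.ndarray, r_max: float = 10.0, n_bins: int = 100):
    """Histogram of all pairwise interatomic distances, averaged over frames.

    trajectory: array of shape (n_frames, n_atoms, 3), e.g. in Angstroms.
    Returns (bin_centers, h), where h is the mean per-frame histogram.
    """
    n_frames, n_atoms, _ = trajectory.shape
    edges = np.linspace(0.0, r_max, n_bins + 1)
    iu, ju = np.triu_indices(n_atoms, k=1)          # unique atom pairs
    hist = np.zeros(n_bins)
    for frame in trajectory:
        d = np.linalg.norm(frame[iu] - frame[ju], axis=1)
        hist += np.histogram(d, bins=edges)[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / n_frames                  # ensemble average
```

As a sanity check, a rigid diatomic trajectory with bond length 1.0 puts all counts into the single bin containing r = 1.0.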
NeurIPS_2023_submissions_huggingface
2023
Online Control for Meta-optimization
Accept (spotlight)
Summary: This paper studies the problem of meta-optimization through the perspective of online control. In this paper, the meta-optimization problem is treated as a sequence of episodic optimization problems, and the performance is measured by the regret against the best static optimizer belonging to a convex class. By rewriting the gradient descent update rule as an LTV system with non-stochastic disturbance, the authors present a control-motivated meta-optimization algorithm that achieves nearly-optimal optimization performance in hindsight. The main theorem shows that the algorithm proposed by this paper can achieve sub-linear regret for both quadratic and convex, smooth costs. Strengths: I think the approach taken by this paper is quite clever in that it rewrites the extremely difficult, highly non-convex meta-optimization problem as a much more tractable online learning problem. The idea seems quite general and I encourage the authors to extend this work to meta-optimize more sophisticated algorithms beyond standard gradient descent (e.g. momentum or Adam). The use of regret to benchmark the performance of meta-optimization also seems to be a good idea, as it balances the desire for fast convergence and the quality of the final solutions. Given this is an ML venue, this paper did a good job of introducing the relevant notations and terminologies from control. Weaknesses: While I think this paper has a lot of potential, I regrettably cannot give a strong rating because I feel the presentation is overall quite lacking. 1. I do not agree with the authors' claim that their setting of meta-optimization generalizes hyper-parameter optimization. While the algorithm in this paper can yield comparable performance to the best hyper-parameters in hindsight, I struggle to find how a good set of hyper-parameters can be extracted from this algorithm. 2. 
This paper seems a bit carried away with the control perspective and fails to give concrete examples of how the results can be applied to practical learning problems. Although an example is given in Appendix I, it is rather simplistic. And the lack of source code makes it hard for the readers to understand how the algorithms are implemented. 3. Some of the notations are not very well-specified. a) the disturbance $w_{t, i}$ is not explicitly defined; is it $\nabla f_{t, i}(x_{t, i-1})$ as given in eq. (3)? b) in Theorem 2, what is the function class $\Pi$, and is it related to $\Pi_{DFC}$? 4. Regarding the equation between lines 196 and 197, isn't it a second-order approximation of gradient descent? This simplifying assumption should be discussed more thoroughly. 5. Algorithm 2 should be more self-contained, i.e., eq. (7) should be in the body. Also, I would love to see more description of this algorithm. 6. Perhaps my greatest complaint: the experiments are very simple and do not reflect any realistic scenarios where meta-optimization is helpful. Linear regression and MNIST are both so small that vanilla SGD can usually get very good performance already. Also, I very much don't like the decision to move almost all of the experimental details into the appendix. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: See my concerns above. I am open to raising my rating significantly if the authors can either prove me wrong or offer fixes to these issues. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: This paper is mostly theoretical and addresses fundamental problems in optimization for ML. So, I do not envision any concerns over ethics. 
However, the authors absolutely should include source code for their experiments, especially since their proposed algorithm is quite involved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
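The review's summary describes rewriting the gradient-descent update as an LTV system with a non-stochastic disturbance. For a fixed quadratic f(x) = ½ xᵀH x + bᵀx this reduction is easy to check numerically: x⁺ = x − η∇f(x) = (I − ηH)x − ηb, i.e. x⁺ = A x + w with A = I − ηH and disturbance w = −ηb. The sketch below is a simplified, constant-coefficient illustration of that idea, not the paper's general time-varying formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta, steps = 3, 0.01, 50

# A random positive-definite quadratic f(x) = 0.5 x^T H x + b^T x
M = rng.standard_normal((d, d))
H = M @ M.T + np.eye(d)
b = rng.standard_normal(d)

x_gd = np.ones(d)            # plain gradient descent iterate
x_ltv = np.ones(d)           # the same update written as a linear system
A = np.eye(d) - eta * H      # system matrix
w = -eta * b                 # "disturbance" term (constant here)

for _ in range(steps):
    x_gd = x_gd - eta * (H @ x_gd + b)   # x - eta * grad f(x)
    x_ltv = A @ x_ltv + w                # A x + w

assert np.allclose(x_gd, x_ltv)
```

The two trajectories coincide by construction; the point of the control view is that choosing how to act on w then becomes a (convex) controller-design problem rather than a non-convex search over hyper-parameters.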
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work and the valuable feedback. We first address the point of simple experiments, and then address other comments one by one below. **please also see pdf for all reviewers, as it contains experiments** We would like to emphasize that this is mainly a theoretical paper, and in our opinion this work has sufficient novelty to stand alone as a conceptual and theoretical contribution. To elaborate on the difficulty of getting this result, and the techniques from different subfields that we needed to incorporate to circumvent known NP-hardness results: we change the formulation of the problem and incorporate convex relaxation (point 1 below), we give a new dynamical system formulation of optimization (which is different from all previous formulations), and we use new techniques from online control to solve this dynamical systems formulation (which have not been used in optimization before). Notice our method can also compete with gradient descent with momentum, and preconditioned methods, which is a strength you wanted to see in future work (and is already accomplished in this paper). To support the theory, we provide proof-of-concept experiments, rather than extensive evaluations as in applied papers. The existing experiments are consistent with the convex regime studied in the paper, and we include an additional experiment on neural networks in the general response. Even though SGD can solve these problems, it still requires hyperparameter tuning to achieve the best performance, and as shown experimentally, meta-optimization has improved performance over time. Extension to non-convex meta-optimization and developing practical algorithms suitable for ultra large-scale optimization is important future work. 
These experimental results also demonstrate that there is potential for this new approach to be applied to a broader set of problems, including nonconvex optimization with large-scale deep neural networks. We respond to your other comments below: 1. Directly optimizing over the hyperparameters is a non-convex problem in general, and non-convex problems are NP-hard in the worst case. We use an improper learning technique, meaning that we do not directly manipulate the hyperparameters but the optimization iterates themselves. This improper learning technique is the result of a convex relaxation, and this combination of techniques to tackle non-convex problems is commonly used in the theoretical machine learning community. A prominent example is the LASSO algorithm: it relaxes the sparsity constraint to an $\ell_1$ constraint. This circumvents the NP-hard problem of recovering the sparse coefficients, and allows for equal expressive power in prediction/classification. (Note there is a large literature on statistical assumptions that allow for recovery of the sparse coefficients, we do not refer to these). Other examples include: online nonstochastic control (which we make use of), trace norm relaxation for matrix completion, and many more. The notion of competing with the best-in-hindsight in terms of performance, a.k.a. regret minimization, is also a widely adopted paradigm. Notably, adaptive gradient methods (starting from AdaGrad) were developed in this context. For hyperparameter optimization, our approach which makes use of convex relaxation and regret minimization, improves upon grid search of hyperparameters in terms of the iterates. In particular, if we can compete with the best optimizer in hindsight, the optimization iterates will converge to the optimum faster over many episodes, compared to grid search. 
Lastly, the relationship between the best disturbance-response controller and the optimal state feedback controller, which directly encodes the hyperparameters, is an interesting research topic. Indeed, [1] shows that in the LQR setting under certain conditions, parameters of the disturbance-response controllers are good approximations of the optimal feedback controller. 2. We describe the control approach in detail because it is the foundation for our algorithm. The example in Appendix I is simple to illustrate the theoretical approach. In appendix H, we give examples of optimization algorithms that we can compete against, which include essentially gradient descent with momentum, and preconditioning methods. We plan to open source our code in the coming weeks and make our method more widely available as soon as we can. 3. Thank you for pointing them out, we will clarify in the revision. (a) Yes, $w_{t, i}$ is as defined in (3) during an episode, and as in (4) at the end of an episode. (b) The algorithm class $\Pi$ and its relationship to $\Pi_{DFC}$ is explained in section 3.2. 4. This equation represents the relationship between the previous gradient and the current gradient. In the convex quadratic case, the $H_{t, i}$ matrix is the Hessian; in the convex smooth case, there always exists such a linear transformation and we consider a particular one, as stated in the appendix. The linear transformation is given by second-order information of the function at different points between the previous and current iterate. 5. Thank you for the suggestion, we will include the ideal cost and more exposition on the algorithm in the main body. We have also commented on the intuition behind the ideal cost in the response to reviewer yXWX. [1] On the Relationship of Optimal State Feedback and Disturbance Response Controllers We hope our response answers your questions and that you consider raising your score. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
I am happy that you have addressed many of my concerns with the motivation of this paper. Originally, I thought your goal was to do hyper-parameter tuning, but now I understand that this paper is about the application of online learning to a more general notion of meta-optimization. Regarding your responses: 1. I am aware of the literature you pointed out. I think a better angle for justifying your algorithm is to consider problems with a time-varying optimization objective. Please correct me if you think my intuition is wrong -- I think the proposed algorithm is a very neat way to "adapt" to a sequence of evolving learning tasks given to the learner, and the regret guarantee would be highly relevant in such a scenario. I think that the emphasis on hyper-parameter tuning may mislead the readers. 2. I am not happy with this response. From my understanding, you directly transplanted online learning/control methods to solve a learning problem. But this paper does not justify why and how such a formulation is applicable or natural for a learning problem, or explain that the terminology is not an artifact of limitations in control. In fact, I am now convinced that this work is very strong from a theoretical perspective, but the motivations are still somewhat weak. The answer I am looking for here is more discussion of why this paper's formulation is not overfitting to existing methods in online learning/control. 3. a) thanks, b) can you just directly tell me what is the comparator class $\Pi$ used in Theorem 3.2? I still cannot find it. 4. I see. But I am now a little confused about how your algorithm computes the $A$ matrices when the cost is a general smooth function. 5. I like your explanation of the ideal cost. It would be lovely to see this discussion being added to the main body. Lastly, the experiments are still kind of simple, but I appreciate the effort put into addressing my concerns. Overall, I am convinced of this paper's technical contributions. 
Nevertheless, I still feel that the discussions leading up to the main results could be made more clear. I will raise my score to **5** and would be inclined to make further changes if my remaining concerns could be answered this week. --- Reply to Comment 1.1.1: Comment: Thank you for your fast response, engagement, and openness. We address your comments point by point below. 1. We chose hyperparameter optimization as an example that is important and everyone is familiar with, but we agree that we consider a much more general setting. We will emphasize that in the final version. 2. We are happy to elaborate on this issue, since indeed, this touches upon the very heart of our approach and problem we tackle. The history of control methods in mathematical optimization is vast, and we give a survey in Appendix A and B. In fact, some of the most well known methods in optimization are directly motivated by intuition from dynamical systems; for example, the Polyak “heavy ball” method, which inspired momentum, is influenced by Newtonian physics as its very name suggests. The standard methodology that connects optimization and control is Lyapunov’s stability theory. Deriving the convergence of various optimization algorithms using Lyapunov stability analysis is surveyed in Wilson’s recent thesis [1]. However, this approach is insufficient for our purpose, because of two reasons: - The problem we address, of competing with the best algorithm, is nonconvex. Thus, stability arguments would imply convergence to a local optimum, which would not give the desired outcome. That is why we resort to a different control formulation stemming from optimal control, and the use of new techniques in online control based on convex relaxation. These techniques guarantee competitiveness with the global optimum (as we explained in the LASSO example for the previous point). 
- Lyapunov’s stability analysis can show the stability of a given system (correspondingly the convergence of a given optimization algorithm). However, our goal is to design meta-optimization algorithms that can compete with the best method in hindsight, which goes beyond the scope of analyzing convergence of existing algorithms. We will clarify these motivations in the revision. 3. b) Yes of course. The characterization is the equation below Line 290: basically the policy class amounts to linear functions of past gradients. As explained in Lines 291 to 294, for deterministic meta-optimization, it captures gradient descent, momentum, and preconditioning methods on pseudo-gradients. 4. This is an excellent question. The matrices A exist, as we prove, but we do not need to compute them. The algorithm applies a linear policy to the disturbances, which we can compute (because the disturbances are given by the gradients), and does not make use of A. The existence of a linear dynamical formulation is important for the analysis, but not for the algorithm. Not having access to A implies that we cannot obtain the full gradients for updating the M matrices, but we can estimate these gradients using zero-order methods, as we detail in Appendix E. 5. We will be happy to add this! [1] Lyapunov Arguments in Optimization. Wilson, Ashia, 2018. We hope this answers your questions and you are open to further raising the score.
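Point 3 in the reply above characterizes the comparator class as linear functions of past gradients. A minimal sketch of such a disturbance-feedback policy is x_{i+1} = x_i − Σ_s M_s g_{i−s+1}, where g_i are the observed gradients; the M-matrix parameterization below is a generic illustration of the class, not the paper's exact algorithm. With M_1 = ηI and all other M_s = 0 the policy is exactly plain gradient descent, which is how the class subsumes it.

```python
import numpy as np

def run_policy(M_list, grad, x0, steps):
    """Update x with a linear function of the h most recent gradients:
       x_{i+1} = x_i - sum_s M_s @ g_{i-s+1}, where g_i = grad(x_i)."""
    x, past = x0.copy(), []
    for _ in range(steps):
        past.insert(0, grad(x))            # most recent gradient first
        past = past[: len(M_list)]         # keep history window of size h
        x = x - sum(M @ g for M, g in zip(M_list, past))
    return x

rng = np.random.default_rng(1)
d, eta = 4, 0.05
H = np.diag(rng.uniform(0.5, 2.0, d))      # simple convex quadratic
b = rng.standard_normal(d)
grad = lambda x: H @ x + b
x0 = np.ones(d)

# With M_1 = eta*I and M_2 = 0, the policy reduces to gradient descent.
x_pol = run_policy([eta * np.eye(d), np.zeros((d, d))], grad, x0, 100)
x_gd = x0.copy()
for _ in range(100):
    x_gd = x_gd - eta * grad(x_gd)
assert np.allclose(x_pol, x_gd)
```

Momentum and preconditioned updates correspond to other fixed choices of the M matrices; learning the M matrices online is what makes the problem convex in the policy parameters.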
Summary: This paper proposes a framework for optimization whose goal is to learn the best optimization algorithm from experience, and gives an algorithmic methodology using feedback control for this meta-optimization problem. The authors derive new efficient algorithms for meta-optimization using recently proposed control methods, and prove sublinear meta-regret bounds for quadratic and convex smooth losses. The approach leverages convex relaxation techniques in the recently-proposed nonstochastic control framework to overcome the challenge of nonconvexity. Consequently, it enables us to learn a method that attains convergence comparable to that of the best optimization method in hindsight from a class of methods. Strengths: This paper introduces a novel approach based on control theory for the task of meta-optimization – online learning of the best optimization algorithm. It provides a novel control formulation for meta-optimization based on the recently proposed framework of online nonstochastic control. Consequently, a new metric for meta-optimization called meta-regret is proposed, which measures the total cost compared to the cost of the best algorithm in hindsight in meta-optimization. Moreover, this is a theoretical paper of high technical quality. It derives efficient algorithms for meta-optimization based on the recently proposed Gradient Perturbation Controller (GPC), a new type of online feedback-loop controller. Furthermore, this paper provides the analysis and proof of tight sublinear regret bounds for quadratic and convex smooth objective functions. Weaknesses: This paper is well organized and clearly written in general. However, in Algorithm 2 (Line 9), Eq. (7) is missing from the main paper, although this equation can be found in the Appendix. This is mainly a theory paper; in Section 4, it provides a proof of concept through experiments with quadratic and convex smooth losses. 
We are curious to investigate the empirical impact of meta-optimization on broad applications. Therefore, it might strengthen the impact if the authors could provide an experiment on a non-convex loss, for example, a classification task using a simple neural network. Technical Quality: 3 good Clarity: 3 good Questions for Authors: This is a very solid theory paper. However, in the main paper, the experiment part is limited. Can the logistic regression experiment for MNIST classification be moved from the Appendix to the main paper? In Figure 1, for the y-axis, is it the moving average of objective values on training data or validation data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no code provided to reproduce the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work and their valuable feedback. We are happy to address the weaknesses: 1. We will provide explanations and intuitions for the ideal cost in the main paper, for clarity and completeness. 2. We have an additional experiment on MNIST classification with a neural network. Comments and details on the experiment are in the top-level response. 3. We definitely plan to open source our implementation in the coming weeks, as soon as we can. For the questions: 1. We can certainly move the logistic regression experiment to the main paper. 2. In Figure 1, the objective values are on the training data, since we study optimization instead of generalization. --- Rebuttal Comment 1.1: Title: Thanks to the authors for the rebuttal Comment: I’ve read comments from all the other reviewers. Thank you for your rebuttal, and I appreciate that my concerns have been addressed.
Summary: This paper proposes an online control method for the meta-optimization problem. The authors formulate the problem as a robust control problem and leverage the non-stochastic control framework to achieve convex relaxation. Regret guarantees are derived theoretically, and experiments show the method's superiority compared with the best optimizers in hindsight. Strengths: The main text is very concise in presenting the main technical discoveries of the paper, as well as the necessary background discussion and experiment results. The appendix provides a very comprehensive introduction to the technical background, proofs of the theorems, and more interesting experiment results. This paper is overall well written and solid in theory. Weaknesses: I personally think the experiment presented in the main paper could be replaced by a more complex one. For example, could you consider experiments with neural network training using Adam, which would be more interesting to the modern AI field? Or any other complex experiments. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: In Line 7 of Algorithm 2, how is the gradient information calculated? If it is calculated through approximation (e.g., finite differences), how will the approximation error affect the final outcomes? Is the meta-regret of the proposed method always positive? Intuitively, the best optimizer in hindsight should always perform better. If yes, why can the proposed method achieve the lowest loss in all experiments compared with all other optimizers? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The limitations of the proposed method are not explicitly discussed in the conclusion, although future directions are pointed out. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work and the valuable feedback. Given your suggestion, we have added a proof-of-concept experiment for neural network classification on the MNIST dataset, with Adam as one of the baselines. Additional comments and details on the experiment are in the top-level response. To address your questions: 1. In Line 7 of Algorithm 2, we assume access to the exact gradients of the function. If we use finite-difference, under Lipschitz loss functions, the approximation error of the gradients will be added linearly to the regret; in other words, the sum of all biases will be added to the final regret. If the gradients are stochastic with zero mean noise, the expected regret will remain the same. 2. The meta-regret of a method can be negative if the method does better than the best optimizer in the benchmark algorithm class $\Pi$. The proposed method can compete with a class of optimizers that include preconditioned methods (see appendix G and H), and Newton’s method converges in one step for quadratic minimization problems. Our baselines in the experiments are mostly first-order, since they are the most popular optimization algorithms. Given the positive evaluation of novelty and contribution in this review, and the additional nonconvex optimization experiment, we hope the reviewer is open to raising their score. --- Rebuttal Comment 1.1: Title: Thank the authors for the response Comment: I have read the authors' responses and other reviewers' comments. I think the authors satisfactorily addressed my concerns. I am glad to raise my score.
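To make the finite-difference point in the rebuttal above concrete, here is a minimal sketch (our illustration, not the paper's algorithm; the loss, step size, and the helper `finite_diff_grad` are hypothetical). A central-difference estimate incurs a small bias per gradient evaluation; the rebuttal's claim is that, under Lipschitz losses, these per-step biases accumulate additively into the final regret.

```python
def finite_diff_grad(f, x, eps=1e-5):
    """Central-difference estimate of the gradient of f at x (a list of floats)."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

# Quadratic loss f(x) = sum_i x_i^2 has the exact gradient 2x.
f = lambda x: sum(v * v for v in x)
x = [3.0, -1.0]
approx = finite_diff_grad(f, x)
exact = [2.0 * v for v in x]
# Per-step gradient bias; over T steps these biases sum into the regret bound.
bias = max(abs(a - e) for a, e in zip(approx, exact))
```

For smooth losses the central-difference bias is O(eps^2) per coordinate, which is why, as the rebuttal notes, stochastic gradients with zero-mean noise leave the expected regret unchanged while systematic biases add up linearly.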
Summary: The paper considers a new framework, meta-optimization. To solve this problem, the authors propose an online control formulation with linear time-varying dynamics. Moreover, a novel algorithm is proposed to solve meta-optimization and shown to enjoy sublinear regret under both the quadratic loss and convex smooth losses. As an application, one can formulate hyperparameter selection, such as choosing the learning rate, as a meta-optimization problem. Finally, the authors provide experiments to show the algorithm's superior performance compared to existing hyperparameter tuning benchmarks. Strengths: 1. The authors consider a new framework, meta-optimization, to choose the optimal hyperparameters. 2. The authors point out that meta-optimization with gradient descent methods can be viewed as an online control formulation with linear dynamics. 3. A new algorithm is proposed to solve the meta-optimization problems inspired by the online nonstochastic control formulation. New techniques are used to show the sublinear regret for meta-optimization with quadratic and convex smooth losses. 4. Comprehensive experiments are conducted and comparisons to benchmarks are provided. 5. The paper is well-organized and well-written. Weaknesses: 1. Several online optimization problems are proposed, but not all of them are used to reach the final goal. The authors may reorganize the presentation and clarify the relation between the formulations. 2. It would be preferable if the authors mentioned the techniques used in the proofs in the main text, following the main theorems. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have some minor questions: 1. The formulation proposed in Lines 57-58 seems not to be used in the analysis. Equation (5) is a more concrete formulation for meta-optimization. Could you explain why this generic formulation is needed and how it relates to the other formulations? 2. 
The online control formulation (1) - (3) involves a control variable $u_{t, i}$ while it is missing in (5). What is the meaning of this control signal for meta-optimization and, more concretely, in learning gradient descent step size? 3. What is the intuition behind using ideal cost $g_{t, i}$ in Algorithm 2? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations and future work have been addressed in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation of our work and the helpful suggestions. To address the weaknesses, we are very happy to clarify the online optimization formulations in the revision, and include more techniques that are used in proving the main theorems. For the questions: 1. This formulation uses short-hand notation on the episodic loss, but it is not necessary. It is the same as (5), which expands the episodic loss. We can remove the redundant one for the final version. 2. The control variable in (5) is implicit in $x_{t, i}^\pi$, where $\pi$ denotes the control policy. $x_{t, i}^\pi$ is the state reached under the policy $\pi$ at time $(t, i)$, so while the control cost $f$ doesn’t take the control signal $u_{t, i}$ into account, the state is still affected by the control policy. In meta-optimization, the control signal can be used to simulate the update of any optimization algorithm; therefore, through learning the best controls, we can compete with the best optimizer. The control signal, including learning the gradient descent step size, is a linear function of past disturbances, which largely depends on the past gradients. 3. We use the ideal cost because we need a loss function that depends only on $M$ to update it. The original control cost is a function of $x_{t, i}$, which depends on $M_{t, i}, M_{t-1, i}, \cdots$. On the other hand, the ideal cost is the cost that we would have incurred, if we started from the zero state and executed $M_{t, i}$ for $L$ time steps. It is a function of only $M_{t, i}$. There are two reasons why using the ideal cost works for non-stochastic control, which we elaborate in the appendix. In summary, the reasons are: - The system is stable, and therefore the control cost depends minimally on the controllers executed before $L$ time steps, e.g. $M_{t-L, i}, M_{t-L-1, i}, \ldots$. 
- We update the $M_{t, i}$’s with a memory-based OGD algorithm developed under the online convex optimization with memory framework, which accounts for the fact that the original control cost can depend on $L$ inputs, and still gives us the desired guarantee. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns have been well-addressed. This is an excellent theoretical work.
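As a toy illustration of the policy class discussed in this thread (our sketch, not the paper's Algorithm 2, and the function name is hypothetical): the rebuttal explains that the control signal is a linear function of past gradients, so the policy class subsumes standard optimizers. Below, a single weight recovers plain gradient descent, while geometric weights on past gradients mimic heavy-ball momentum; the paper's learned controller instead adapts these weights online.

```python
def run_linear_past_gradient_method(M, grad, x0, T):
    """Run an optimizer whose update is a fixed linear function of past
    gradients: x_{t+1} = x_t - sum_j M[j] * g_{t-j}.
    This only illustrates the policy class; the paper learns M online."""
    x, past = x0, []
    for _ in range(T):
        past.insert(0, grad(x))  # newest gradient first
        x = x - sum(m * g for m, g in zip(M, past))
    return x

grad = lambda x: 2.0 * (x - 3.0)  # f(x) = (x - 3)^2, minimum at x = 3
# M = [eta] recovers plain gradient descent with step size eta ...
x_gd = run_linear_past_gradient_method([0.1], grad, x0=10.0, T=200)
# ... while geometric weights on past gradients resemble heavy-ball momentum
# (truncated here to a window of 10 past gradients).
eta, beta = 0.1, 0.9
x_mom = run_linear_past_gradient_method([eta * beta**j for j in range(10)],
                                        grad, x0=10.0, T=200)
```

This matches the rebuttal's point that "through learning the best controls, we can compete with the best optimizer": every fixed weight vector M corresponds to one benchmark algorithm, and meta-regret is measured against the best of them in hindsight.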
Rebuttal 1: Rebuttal: **please see attached pdf** We thank all reviewers for their time and valuable feedback. One common suggestion is including more complex experiments, for example with neural networks. Since the main contribution of this paper is theoretical, the experiments are proofs-of-concept to show the potential of this theoretical framework. The existing experiments are in the convex regime, consistent with theory, and empirically support our theoretical results. To show the potential of future applications of this theory in practical optimization settings, we give an additional proof-of-concept experiment in the non-convex regime, namely MNIST classification with a neural network, in the uploaded PDF file. As can be seen in the first experiment, given a good base learning rate ($\eta$ in the algorithm), meta-optimization improves over SGD, Adam, and momentum (with tuned hyperparameters). The performance gap also widens as there are more and more episodes. In the second experiment, we give meta-optimization a suboptimal base learning rate (0.5), and show that the meta-optimizer not only outperforms SGD with lr=0.5, but also SGD with lr=1.0, and approaches the performance of momentum, the best baseline. Given these initial positive results, we believe that the meta-optimization framework holds much promise and this work lays a solid theoretical foundation for future research. Experimental details: We consider the MNIST classification task with a neural network. The network is fully connected with 3 layers; the layer sizes are [784, 512, 10]. We run for 5 episodes with 20 epochs in each episode. Since this problem is high-dimensional, we choose to learn scalar weights of past gradients. We take L=20 and the meta-learning rate = 0.005, and we have found that the results are insensitive to the meta-learning rate choice. The network is trained using stochastic gradients from batches of size 256. 
The hyperparameters of the baselines are tuned, and the search space is the following:
- SGD: learning rate in [1e-5, 10] with increments of factor 10
- Momentum: learning rate in [1e-5, 10] with increments of factor 10; momentum in [0.9, 0.95, 0.99]
- Nesterov momentum: learning rate in [1e-5, 10] with increments of factor 10; momentum in [0.9, 0.95, 0.99]
- Adam: learning rate in [1e-4, 1e-3, 1e-2]; beta_1 in [0.9, 0.95, 0.99]; beta_2 in [0.9, 0.95, 0.99]

For Adam, the learning rates 1e-4 and 1e-2 were already suboptimal, so we did not search beyond them. Pdf: /pdf/827af921de517e9bb8bdc0d0cadf6b9094a5da4a.pdf
NeurIPS_2023_submissions_huggingface
2023
Structured Neural Networks for Density Estimation and Causal Inference
Accept (poster)
Summary: This work studies the impact of imposing known conditional independence structure in fully-connected neural network architectures, notably in the setting of autoregressive normalizing flows. The independence is imposed by masking the weight matrices in the linear layers, similar to the MADE approach. The masks are determined by a factorization of the known adjacency matrix that (approximately) maximizes the number of connections (paths) between inputs and outputs. Through several experiments, the authors show generalization improvement over baselines. **Edit** Most of my concerns were addressed during the rebuttal/discussion period. I am therefore upgrading my score from 4 to 6. Strengths: * The manuscript is well written, with standard notations. * The contributions are clearly defined and the assumptions (known adjacency) are explicit. * The proposed approach to impose conditional independences is sound. * The claims are supported by the experiments, notably concerning improved generalization. Weaknesses: The idea of imposing prior independence knowledge into autoregressive flows was first introduced by Wehenkel and Louppe (2021), cited as [25] in this manuscript. As the authors mention (lines 246-247), StrAF only differs from GNF in the approach to impose the independences, but is conceptually identical. Actually, Wehenkel and Louppe (2021) already propose the approach of the present work: > An alternative approach would be to use masking scheme similar to what is done by Germain et al. (2015) in MADE as suggested by Lachapelle et al. (2019). In addition, the official [UMNN repository](https://github.com/AWehenkel/UMNN), also cited in this work, links to the normalizing flow library [Zuko](https://github.com/francois-rozet/zuko), from the same lab. The latter library implements autoregressive flow conditioners as masked multi-layer perceptrons for which the masks are a factorization of the adjacency matrix between the inputs and outputs. 
The similarities with the proposed StrNN are too strong to be left unaddressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Algorithm 1: It is not clear to me how the factorization algorithm is applied when the StrNN has more than one hidden layer. * Section 5.1: Are the same number of layers/neurons used for StrNN and MADE in this experiment? * Line 317: "As GNF permutes variables between flow steps". I was not able to find a mention of this in [25]. * Figure 5/Table 1: I don't understand how "ARF-10" and "GNF-10" can be so much worse than "GNF-1", as they are strictly more expressive. Is it an overfitting issue? Or maybe an invertibility issue? UMNN is not always numerically invertible. What about "ARF-1"? * Table 1: The authors make a distinction between "density estimation" and "sample quality", which does not make sense to me. If a flow perfectly estimates the density, it necessarily generates probable samples, unless the invertibility is not guaranteed. * Why not study the use of StrNN in settings other than density evaluation, such as physics-informed machine learning, where it is common to infuse prior knowledge into the structure of the neural networks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: * It is never mentioned that a StrNN is a (pruned) fully-connected network with element-wise activation functions. The approach does not apply to convolutional, attention-based or recurrent networks, and does not support skip/residual connections or normalization layers. * This is not a limitation of this work, but one should be careful not to confuse Bayesian networks and causal graphs. 
A Bayesian network (or its adjacency matrix) over variables merely indicates independencies between the variables, but in no way implies causal relationships. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer LoZH for their detailed feedback, especially for highlighting the clarity and soundness of our approach to enforce conditional independences in NNs. We begin with the Zuko repo, which is relevant to our work. First, are there manuscripts or research results using Zuko that we could cite? We could not find any based on the GitHub authors' profiles. Otherwise we will reference the repo in our revision. Second, we note that the **code in the Zuko repo (as cloned on the NeurIPS submissions deadline)** **does not allow users to initialize any of its flows to encode a global adjacency structure**, and attempting to do so gives an error. Third, **while Zuko and StrNN’s greedy factorizations of the adjacency matrix are similar, they yield different results.** In the experiment below, mask matrices found by our greedy algorithm and by Zuko result in different performance. Zuko operates on the unique rows of an adjacency matrix, which causes issues when the matrix contains many repeating rows. This type of structure may be naturally encountered in datasets with star shaped graphs. Consider a NxN matrix where the first (N-1) rows contain dependence on the first variable ([1, 0, …, 0]) and only the last row depends on the second variable ([0, 1, 0, …, 0]). Given a budget of H hidden units, Zuko would assign H // 2 units to represent the variable corresponding to the last row. This ineffectively represents the dependence of all other outputs on the first variable, and is avoided by our greedy algorithm. In a non-linear dataset with N variables and the above adjacency matrix, we compare StrAF with our greedy algorithm and Zuko’s algorithm in the table below. We report runs from 5 random seeds, using a 95% CI and N hidden units. 
| Model | Test NLL (N=50) | Test NLL (N=1024) | | --- | --- | --- | | StrNN-Greedy | 0.4311 ± 0.5406 | -18.734 ± 1.999 | | Zuko | 1.3276 ± 0.2806 | -14.550 ± 0.936 | We see our method performs better, especially as N increases. **This supports our claim that the choice of mask factorization can impact performance beyond enforcing independences.** Fourth, Wehenkel and Louppe (2021) and Lachapelle et al (2019) did suggest a masking scheme like MADE. **But there is a gap between the suggestion of an idea and a practical instantiation thereof.** Neither works studied any concrete algorithms or experiments to this end, and only raised the possibility of such an approach for future work. **We believe our work establishes the feasibility of the method, conducts rigorous evaluations, and demonstrates its application in a novel context.** We reply to other questions and concerns below: > Algorithm 1: … how the … algorithm is applied when ... StrNN has more than one hidden layer. > **We apply the factorization algorithm recursively, layer-by-layer, for efficiency and simplicity.** Given $A\in \{0, 1\}^{d \times d}$, we run Algorithm 1 once to find $ A_1 \in \{0, 1\}^{d \times h_1}$ and $ M^1 \in \{0, 1\}^{h_1 \times d}$ for a layer of width $h_1$ such that $A_1\cdot M^1 \sim A$, where $\sim$ denotes the matrices share the same sparsity. For the next layer with $h_2$ units, we use $A_1$ in place of $A$ to find $A_2 \in \{0, 1\}^{d \times h_2}$ and $M^2 \in \{0, 1\}^{h_2 \times h_1}$ such that $A_2 \cdot M^2 \sim A_1$. Repeat until we have all the masks. > Section 5.1: same number of layers/neurons used for StrNN and MADE …? > **Yes**, the NNs in StrNN and MADE have the same number of layers/hidden units to control for capacity. > Line 317: "As GNF permutes variables …". … not able to find a mention of this in [25]. > **See the following link (with line #) in the GNF repo where the authors permute variables between flow steps:** https://tinyurl.com/vs92kncc. 
Variable permutation is common in the normalizing flow literature. We believe that pointing out the necessity of properly handling adjacency masks with respect to this operation is a simple yet impactful observation. > Figure 5/Table 1: how "ARF-10" and "GNF-10" can be so much worse than "GNF-1" … > **High expressivity of a model does not necessarily lead to better generalization.** Though we have conducted hyper-parameter/model selection for all methods, as detailed in Appendix E.2, the baseline models can still over-fit on spurious correlations. When an adjacency matrix is known, we can avoid this, leading to the improvement of GNF-1 over ARF-10. As shown in Figure 2 and the link above, GNF does not properly handle variable permutations and adjacency masking, leading to an incorrect global adjacency matrix, hence the drop in performance between GNF-1 and GNF-10. We will edit our final revision for clarity on this matter. > Table 1: … distinction between "density estimation" and "sample quality"… > We thank the reviewer for allowing us to clarify this subtle point. This distinction arises from the metric used to assess how *well* the flow estimates the target density. Common divergences between probability distributions used to learn the flow, e.g. KL divergence, are weighted by the target distribution. Thus, the loss is not very sensitive to the model’s performance in low-probability regions of the target. This behaviour might lead to poor “sample quality” even if the likelihood values on the test set are high. > … StrNN in other settings than density evaluation, such as physics-informed machine learning … > This is a great direction for future research! In our paper, we studied causal effect estimation as a novel application area. **We will keep physics-informed ML in mind for future work.** > (StrNN) does not apply to convolutional, attention-based or recurrent networks, ... does not support skip/residual connections or normalization layers. 
> **StrNN can be extended to several of these architectures.** While our focus is on MLPs, the StrNN can be applied in the linear layers of recurrent networks to enforce structure throughout the sequence. **This is beyond the scope of this work.** --- Rebuttal Comment 1.1: Comment: Thank you for your answers and the experiments you have conducted in such a short time. You have addressed most of my concerns. I appreciate your honesty regarding the similarities between your factorization algorithm(s) and the one present in Zuko. I also appreciate you taking the time to understand and highlight the differences, which, as you demonstrate, can impact the performances beyond enforcing the independences. I would like to mention that, to my understanding, one factorization is not necessarily better than the other. They are different and will lead to different results for different adjacency structures. Nevertheless, you rightfully observe that, even though the factorization algorithm is present in Zuko, users are not easily able to initialize their flow to respect a particular adjacency matrix. After some research, I did not find a publication linked to the Zuko library, or to the factorization algorithm therein. I believe a reference to the repo as well as a short discussion of the similarities/differences would be enough. Finally, I agree with you that there is a gap between the suggestion of an idea and its implementation, and your work goes beyond this idea by demonstrating the impact of the factorization in terms of generalization. Concerning my questions, I thank you for your clear answers. I would suggest adding some of these to the manuscript/discussion. In view of the rebuttal and the proposed changes, I am willing to reconsider my score. I believe your work, although light from a theory perspective, is a valuable contribution to the density estimation community. 
--- Reply to Comment 1.1.1: Comment: Dear Reviewer LoZH: Thank you for the prompt response to our rebuttal. We are very glad that we have managed to address most of your concerns. We agree it is unlikely there is one factorization scheme which is optimal for all possible adjacency matrices, which is a limitation we will discuss in our revision. However, we do believe some factorization objectives/algorithms are better than others on average due to how they dictate the resulting masked NN architecture. We have identified this as an important area to formalize and investigate in future work. In our rebuttal, we used one example where our greedy algorithm outperforms Zuko to highlight one key difference. At the same time, we acknowledge that there might be specific adjacency structures for which our greedy algorithm would do less well. Nonetheless, StrNN also gives us the ability to choose our factorization objective based on prior information, which we believe will help shed light on this in the future. We are currently editing our manuscript for clarity based on all reviewer suggestions and questions. Thank you again for helping us improve our work! If you do find our responses satisfactory, we would greatly appreciate it if you could re-evaluate our paper for a higher score. Thank you for your time. Best, Authors
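To make the unique-rows allocation discussed in this thread concrete, here is a toy sketch (our illustration of the behavior described in the rebuttal, not Zuko's or the paper's actual code; the helpers `factor_unique_rows` and `zero_pattern` are hypothetical names). Hidden units are spread evenly over the unique rows of the adjacency matrix, so on the star-shaped example half the budget goes to the single row that depends on the second variable, even though the factorization still preserves the independence pattern exactly.

```python
def factor_unique_rows(A, h):
    """Factor a binary adjacency A (n_out x n_in) into A1 (n_out x h) and
    M1 (h x n_in) whose product has the same zero pattern as A, by
    allocating hidden units evenly among the unique rows of A."""
    uniq = []
    for row in A:
        if row not in uniq:
            uniq.append(row)
    assert h >= len(uniq), "need at least one hidden unit per unique row"
    # Hidden unit k carries the input pattern uniq[k % len(uniq)].
    M1 = [uniq[k % len(uniq)] for k in range(h)]
    # Output i connects to unit k iff unit k carries exactly row i's pattern.
    A1 = [[1 if M1[k] == row else 0 for k in range(h)] for row in A]
    return A1, M1

def zero_pattern(A, B):
    """Zero pattern (boolean product) of the matrix product A @ B."""
    return [[1 if any(a and b for a, b in zip(Ar, Bc)) else 0
             for Bc in zip(*B)] for Ar in A]

# Star-shaped example from the rebuttal: rows 0..N-2 depend only on x1,
# row N-1 depends only on x2.
N = 6
A = [[1] + [0] * (N - 1) for _ in range(N - 1)] + [[0, 1] + [0] * (N - 2)]
A1, M1 = factor_unique_rows(A, h=N)
assert zero_pattern(A1, M1) == A  # the independencies are exactly preserved
```

On this matrix the even split mirrors the H // 2 allocation criticized in the rebuttal: half the hidden units serve the single output that depends on the second variable. The paper's greedy objective instead chooses the factorization to (approximately) maximize the number of input-output paths, avoiding this waste of capacity.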
Summary: The authors propose a novel method for constructing structured neural networks (StrNN) that are able to respect causal independencies between variables. Formal constraints for weight mask creation are discussed and evaluated empirically for an exact method and a greedy algorithm. The conditioned StrNN are furthermore leveraged for conditioning autoregressive flow models. Practical application is successfully demonstrated over multiple synthetic datasets with comparisons between several baseline models, with and without the use of adjacency information. To the best of my knowledge, related work is discussed sufficiently in the context of causal density estimation. Strengths: 1. The authors propose a novel method for constructing structured models via weight masking that integrates seamlessly with existing neural architectures while respecting the strict independence assumptions of causal models. Preconditions and assumptions for application of the approach, specifically knowledge about the causal graph structure, are clearly stated. 2. To tackle the infeasibility of exact mask creation on larger graphs, a greedy algorithm is proposed and its practical application is demonstrated. 3. The authors additionally include experiments on binary MNIST data, for which the underlying causal structure is unknown. By imposing a causal graph structure which promotes the usage of spatially local information, the authors improve performance for non-synthetic data over baselines. Weaknesses: The example shown in Figure 1 decomposes the network into two separate networks. However, constructing the displayed network would not require a complicated mask decomposition, but could be trivially solved by constructing two independent networks with constrained layer width. Only by inspecting the example provided in the appendix is it revealed that the presence of split-structures leads to shared weights between the outputs. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. As causal mechanisms are often assumed to be independent in causal literature, I would like to ask the authors about the benefits or downsides of allowing for such shared weights within the network. 2. Furthermore, I would like to ask the authors to discuss possible simplifications for specific causal structures, e.g. in the case of independent causal mechanisms as seen in Figure 1. Overall the idea is pretty good with a clever way of enforcing the causal independencies. However, all of this is assuming that the networks can leverage shared information between different mechanisms. If that is not the case then you could just train an independent density estimator for every single edge and (from a purely causal perspective) the problem becomes trivial to solve. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No concerns here Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer icwN kindly for their encouraging and constructive feedback, especially for highlighting the novelty and efficacy of our matrix factorization approach to enforcing exact independence structures in arbitrary NNs. > The example shown in Figure 1 decomposes the network into two separate networks. > We will use an alternative example adjacency matrix in Figure 1 which requires shared weights in our final revision. We thank the reviewer for their attention to detail on this matter. A main point in the reviewer’s feedback is regarding our design choice to use one single neural network with shared weights in StrNN. In order to enforce independence structure among variables in a single NN, we are required to solve the binary matrix factorization problem central to our paper. The reviewer is correct in saying we can instead find a separate NN for each output variable, which only takes that variable’s dependencies as inputs. That approach would be very simple and render masking and matrix factorization procedures unnecessary. However, **we believe that the particular way we share weights in StrNN has several advantages**: 1. **Sharing weights improves both memory and computational efficiency compared to other approaches for masking.** For example, when there are D input features, using D independent networks or the GNF baseline (which masks input variables) necessarily requires D forward passes for a single input example. In comparison, we can do the same computation in a single forward pass. This becomes especially relevant in contexts such as continuous normalizing flows (CNFs), where many network evaluations must be done in the ODE solver. Here, methods with poor asymptotic complexity become impractical. 
In our additional results for integrating StrNNs into CNFs with the dataset from Section 5.3 (as described in the **general author response**), we find that the StrNN model outperforms a baseline FFJORD even when using a number of hidden units on the same order of magnitude (as opposed to D times more, see **supplementary rebuttal PDF** for exact experiment details). This would not be possible unless the model leveraged weight sharing in StrNNs, providing further empirical justification for the utility of this choice. 2. **From a causality perspective, one possible benefit from sharing weights could come from robustness to model misspecification.** The best model depends on the actual problem at hand, its associated data, and possibly the true generative process. To our knowledge, an empirical study comparing sharing weights vs independent models does not exist and would be interesting for the community. Further investigation into this question using the StrAF extension of CAREFL we built in Section 5.4 of our paper would be a promising direction for future research. However, as suggested in [1], if the functional forms of the independent mechanisms are misspecified, then variables outside of the Markov blanket may improve the predictive model performance. By sharing weights inside the network, gradient information from other variables’ relationships can improve generalization of a given conditional distribution. Lastly, the reviewer asked us to discuss > possible simplification of specific causal structures, i.e., independent causal mechanisms as in Figure 1. > **We believe applying our StrNN framework to other statistical models in causal inference that may inherit sparse structure is a fruitful research direction.** Two possible simplifications are: 1. **Additive noise models with a common noise mechanism**, i.e., the data-generating process for each variable $x_k$ is given by $x_k = f_k(x_1,\dots,x_{k-1}) + \epsilon$, where $\epsilon$ is independent of $k$. 
In this case, the StrNN outputs only need to predict the conditional expectation $f_k = \mathbb{E}[x_k | x_1,\dots,x_{k-1}]$ for all $k$, and a separate network is used to model the distribution of the noise variable $\epsilon$. 2. **Graphical models with sub-groups of connected components**. For example, Figure 1 shows a model with four variables where $(x_1,x_2)$ is a connected group, and $(x_3,x_4)$ is connected to the first group by an edge from $x_2$. In this case, we could apply StrNN to groups of variables in a hierarchical way: first to the sub-groups using the single edge between them, and then to the variables within each group. Finally, we would like to extend our thanks once again to Reviewer icwN for the insightful feedback provided. We believe that the changes made to our manuscript address your questions and concerns. We hope you will find the revisions satisfactory and consider a reevaluation of our paper's score. Should you have any further feedback or questions, please feel free to share. [1] Peters J, Janzing D, Schölkopf B. Elements of causal inference: foundations and learning algorithms. The MIT Press; 2017. Chapter 6. Multivariate Causal Models. p.103. --- Rebuttal Comment 1.1: Title: Thank you for the Response Comment: I would like to thank the authors for their response. I am happy with their response and am still positive about the paper and its contributions.
Summary: In this paper, the authors present a neural network architecture that can fulfill the Bayesian DAG conditional independencies needed for normalized density estimation. In this work, the authors start from a binary lower adjacency matrix that encodes the independencies of a Bayesian network DAG. Then they introduce a factorization of the global adjacency matrix into L adjacency matrices, which can be used as masks on each layer. This construction allows neural networks to be trained for a normalized density estimation task. The authors introduce a heuristic to exactly factorize the adjacency matrix for the different layers using two objective functions. With this building block the authors proceed to create a normalizing flow architecture that respects the independence restrictions end-to-end. The authors then compare their approach empirically on different tasks against MADE, a neural density estimator that allows general dependencies for a given random variable ordering. Strengths: I found the paper insightful, and the results show that the approach is beneficial. The paper is technically sound and easy to read. The results showing an improvement in data efficiency for a given negative log-likelihood are also very interesting. The introduction of the normalizing flow approach and the comparison in the causal setting are also nice additions that can have impact in the broader community. The experiment on sample quality shows the benefit of restricting the dependencies in the network, as it cuts paths for noise in other random variables (and feature transformations) to propagate through the network. Weaknesses: The major weakness of this paper is the limited empirical section in comparison to other papers in this domain. This can cause readers to wonder whether the benefits of the new approach as a density estimator are significant for other datasets. A broader comparison on other datasets would make the paper more robust.
Also, and I consider this a minor weakness in my review, the method, although insightful, does not provide a way to obtain the global adjacency matrix. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: As I was reading the paper, my first thought was that you would explore the possibility of discovering the dependencies for a given order. Here one can start with a dense adjacency matrix, train the network, clip nodes in the layers according to the Lottery Ticket Hypothesis, and propagate the masks forward, i.e., multiplying all the masks to get the adjacency matrix. As you are clipping, the adjacency matrix is guaranteed to either stay the same or introduce independencies. At that point, you can even use your factorization algorithm again to obtain a network with more/other nodes active while still respecting the new independencies. I'm wondering if you explored similar ideas during your research? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The main limitations of the approaches presented are inherited from the restrictions on ordered models, e.g., marginalization and MAP queries are intractable in the general case. This is not mentioned in the paper. There is no potential negative societal impact to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to warmly thank Reviewer Wq9i for their encouraging feedback, especially for highlighting our method's emphasis on data efficiency and the novelty of the causal effect estimation application. The reviewer raises a good point that a broader comparison on other datasets would make our paper more robust. While the synthetic datasets included in our evaluation are highly non-linear and challenging to model, we are working on extending our method to several real-world datasets. We have obtained favorable density estimation results by adding StrNN to continuous normalizing flows such as FFJORD, which we have included in the general author response. This further validates StrNN's efficacy as a density estimation technique. **re: StrNNs and structure discovery** > Explore the possibility of discovering the dependencies for a given order. Here one can start with a dense adjacency matrix, train the network, clip nodes in the layers according to the Lottery Ticket Hypothesis, and propagate the masks forward … I'm wondering if you explored similar ideas during your research? > That is a great point! Overall, structure discovery is outside the scope of our paper since we assume either the ground truth Bayesian network or a good estimate is known. However, **we are interested in pursuing future work in using StrNN to learn structure from data, thus providing a full pipeline from structure discovery to density estimation and sample generation.** **We have considered the following potential connections to dropout and the lottery ticket hypothesis during our work.** First, we noticed that stochastically introducing sparsity into a neural network has interesting connections to dropout, which is also known to improve model generalization.
It would be interesting to investigate if specific patterns of sparsity emerge from this stochastic process, and as the reviewer points out, if our method provides a framework with which to directly optimize for any specific qualities of sparsity while respecting structure. Second, we have considered a gradient-based approach to learn structure from data as well, where we might add a group regularization penalty to the weights in order to penalize the dependence of certain marginal conditionals on individual input variables. > The main limitations of the approaches presented are inherited from the restrictions on ordered models, e.g., marginalization and map queries are intractable for the general case. This is not mentioned in the paper. > Lastly, we thank the reviewer for raising an important point on the core limitation of all autoregressive models, which is the difficulty of performing marginal and MAP inference. We will make this point clearer in our final revision. We would like to thank Reviewer Wq9i once again for your constructive feedback. We are confident our revised manuscript can address your questions and concerns, and we kindly invite you to reconsider the score for our paper if you are satisfied with our responses. Please don’t hesitate to let us know of any additional comments! --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I still consider the paper in a positive light and after following the discussion, I will keep my score as is.
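The gradient-based idea mentioned above can be sketched minimally: a group penalty over the first-layer weights leaving each input variable, so that whole input dependencies can be driven toward zero. This is only an illustration of the direction described in the rebuttal (the penalty form and names are assumptions), not a finished structure-discovery method:

```python
import numpy as np

# Sketch: column-wise group lasso on a first-layer weight matrix. Each group
# is the set of weights connecting input x_j to the hidden layer; shrinking a
# whole column to zero removes the dependence on x_j.
def group_lasso_penalty(W1: np.ndarray, lam: float = 0.1) -> float:
    return float(lam * np.linalg.norm(W1, axis=0).sum())

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 5))   # 16 hidden units, 5 input variables
penalty = group_lasso_penalty(W1)
```

In training, this term would be added to the negative log-likelihood so that columns (input dependencies) not supported by the data are pruned, after which the surviving sparsity pattern could be read off as an estimated adjacency structure.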
Summary: This paper introduces structured neural networks such that the resulting neural network represents the factorization of a given Bayesian network. To do so, each layer of the neural network is masked, and the product of the masks of all layers must be the same as the adjacency matrix of the DAG representing the Bayesian network. With this construction, the conditional dependencies represented by structured neural networks will be consistent with the given Bayesian network. The paper proposes a simple greedy algorithm to find the masks. It also proposes using the structured neural network to construct the coupling layers for normalizing flows and claims that the resulting generative model is better for causal inference (intervention and counterfactuals) than the prior approach. Strengths: The paper is well-written and the contribution towards causal inference is solid. Weaknesses: 1- The structured neural network augments MADE with a better mask construction algorithm such that the factorization can be defined for any DAG structure. However, it is not a fundamentally different model. 2- The paper didn't propose any approach for learning the structure of the DAG given the provided structured neural network parametrization. 3- MADE is not a strong density estimator, and comparing only to MADE does not validate the strength of structured neural networks as density estimators. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How would GNF compare to StrAF if it did permute the latent variables after the first step? 2) During the comparison with CAREFL, did you provide CAREFL with the external DAG orders that StrAF uses? If not, the learned causal order by CAREFL may not exactly specify the underlying DAG, which may result in lower performance in causal inference. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer Tj4q for their feedback and insightful comments, especially for highlighting the simplicity of our matrix factorization approach and our contribution to the causal inference application. First, we hope to provide some further clarification on the questions posed by the reviewer: > How GNF would compare to StrAF if it does permute the latent variables after the first step? > **We believe GNF performs worse than StrAF when the latent variables are permuted after the first step.** The GNF already permutes latent variables after the first step, but does not address the effect of this operation on the conditional independencies enforced by the overall network (i.e., the resulting adjacency matrix between inputs and outputs). This is consistent with how the GNF authors have implemented their official code-base (see https://tinyurl.com/vs92kncc). As we visualize in Figure 2, this permutation can encode incorrect connectivity between the variables. Our experiments show this causes higher MMD values between samples from GNF and the true distribution, in comparison to StrAF. > During the comparison with CAREFL did you provide CAREFL with external DAG orders that StrAF uses? … > **Yes, we provide the ground truth DAGs to both StrAF and CAREFL during the evaluations for causal inference.** However, CAREFL can only utilize the autoregressive order provided whereas StrAF enforces the entire adjacency structure, which results in improved performance. Even though CAREFL discussed approaches for extending their pairwise discovery method to multivariate data, they assumed the autoregressive orders are given in their causal inference experiments. Similarly, one major assumption in our experiments is that the true structure has been resolved before modelling the generative process. Next, we address some of the weaknesses of our paper as identified by the reviewer. 
> The structured neural network augments MADE with a better mask construction algorithm such that the factorization can be defined for any DAG structure. However, it is not a fundamentally different model. > We believe that, **while StrNN is similar to MADE, the augmentation itself makes a significant contribution to the literature** for the following reasons: 1. While MADE identifies an approach to enforce autoregressive dependence in NN outputs by masking weights, their algorithm does not consider sparse autoregressive dependence, which appears in many applications such as causal inference. 2. The way we enforce structure through matrix factorization allows us to directly specify a neural architecture. As far as we know, this important link between masking and NN architecture has not been explicitly discussed or explored in MADE or other prior works. We refer reviewer Tj4q to our overall author response for additional experiments on how the factorization objectives can impact NN performance. > The paper didn't propose any approach for learning the structure of DAG given the provided structured neural network parametrization. > **In the conclusions and limitations section of our manuscript, we mention that while we assume access to the true adjacency matrix, one can use any SOTA structure discovery algorithm, such as NO TEARS, in conjunction with StrNN and StrAF, as long as one keeps the flow layers un-permuted.** We will also add an additional reference to the causal-learn [1] package, which provides a variety of out-of-the-box structure discovery algorithms that can be easily integrated into our StrNN/StrAF code. 
**While structure discovery is outside the scope of this paper, we are interested in pursuing future work in using StrNN to learn structure from data, providing a full pipeline from structure discovery to density estimation and sample generation.** One option is to add a group regularization penalty to the network masks or weights in order to penalize the dependence of certain marginal conditionals on individual input variables. We have also thought of an alternative stochastic approach to structure discovery, by using dropout to reduce the total number of connections, which is similar to the proposal from reviewer Wq9i based on the lottery ticket hypothesis. > MADE is not a strong density estimator and comparing only to MADE does not validate the strength of structured neural networks as density estimators. > We compare to MADE as a baseline because it is the natural choice that assumes a full autoregressive factorization. We do not claim that StrNN is the strongest binary density estimator, rather we show that **adding structure to several existing methods improves their performance.** We later show that incorporating StrNN into popular continuous-valued density estimation methods based on autoregressive flows improves their performance. Our baseline, GNF, is a very strong density estimator, and Table 1 in our paper shows that StrAF can improve upon GNF for both test likelihood values and sample quality. To reinforce this point, we incorporated StrNN with continuous normalizing flows, FFJORD. We report results in the overall author response. We observe that adding StrNN improved performance against the baseline FFJORD model, again demonstrating the strength of StrNN, not as an independent density estimator, but as a drop-in replacement to improve the performance of density estimators by leveraging adjacency structure. We aim to further study StrNN to improve current SOTA density estimators, such as diffusion models. 
Lastly, we thank Reviewer Tj4q once again for your constructive feedback. We are confident our revised manuscript can address your questions and concerns, and we kindly invite you to reconsider the score for our paper if you are satisfied with our responses. [1] Zheng Y, Huang B, Chen W, Ramsey J, Gong M, Cai R, Shimizu S, Spirtes P, Zhang K. Causal-learn: Causal Discovery in Python. arXiv preprint arXiv:2307.16405. 2023 Jul 31.
Rebuttal 1: Rebuttal: We thank all reviewers for your engagement and thoughtful comments. We have addressed specific concerns and questions in individual rebuttals, but here we **highlight two experiments added based on the feedback**: 1. We extend Section 5.3 with experiments on **continuous normalizing flows (CNF)**. We choose FFJORD [1] (denoted **FFJORD-FC**), a popular strong density estimator, as our baseline, and improve it by injecting prior known structure via StrNN. In particular, we replace the feed-forward network used to model the flow dynamics with a StrNN, denoted **FFJORD-StrNN**. We also compare against [2], which also attempts to encode structure into the ODEnets of a FFJORD model (denoted **FFJORD-Weilbach**). However, FFJORD-Weilbach does not use mask factorization and directly multiplies an adjacency matrix onto a single hidden layer. Thus, it is restricted to a single hidden layer with the same number of units as the input dimension, causing underperformance. We note that [2] proposes interesting orthogonal avenues for improving CNFs which we can explore in the future. Using the dataset from Section 5.3, we perform a hyperparameter search (**Table 1** in the PDF document). We report Test NLL with a 95% CI based on 8 runs from random initializations below:

| Model | Test NLL |
| --- | --- |
| FFJORD-FC | -3.9718 ± 0.0677 |
| FFJORD-StrNN | **-4.0583 ± 0.0089** |
| FFJORD-Weilbach | -1.0624 ± 0.0724 |

Integration of StrNN results in significantly better performance against both baselines. **We believe this reinforces the comments identifying StrNN as a generally applicable, drop-in tool to improve existing density estimators.** 2. We highlight the importance of the StrNN capability to **specify different objectives in the adjacency matrix factorization algorithm**. In particular, we wish to demonstrate the consequences of sub-optimal factorizations.
We show density estimation performance using two alternative factorizations: (1) **mincon**, which minimizes the number of connections while remaining faithful to the adjacency matrix, and (2) **random**, which recreates the random factorization scheme from [3]. We compare these two against our **greedy** algorithm when initializing StrNNs in a StrAF model. Our dataset is based on a 10-dimensional sparse linear additive SEM, similar to Section 5.4. We use a 5-step flow with 3 hidden layers, each with 20 hidden units. We report test NLL with a 95% CI based on 5 runs from different random initializations. We also report the fraction of remaining connections in the masked NN averaged across layers, where 1.0 is equivalent to a fully connected network, and the test NLL when the flow is trained using N samples.

| Factorization Method | Fraction remaining connections | Test NLL (N=100) | Test NLL (N=800) |
| --- | --- | --- | --- |
| Greedy | 0.255 | **15.765 ± 0.066** | **14.960 ± 0.115** |
| Random [3] | 0.1245 | 16.733 ± 0.291 | 15.376 ± 0.183 |
| Mincon | 0.085 | 17.607 ± 0.222 | 17.094 ± 0.078 |

We also visualize the trend of these results as a function of dataset size (Figure 1 in the attached PDF). **We see clear improvement using the greedy approach, reiterating the importance of factorization with respect to an objective.** Furthermore, we clarify **common concerns** the reviewers have raised: 1. **Comparisons to MADE**: Reviewers noted that we drew inspiration from MADE to mask weight matrices. We believe **our extension of this idea to enforce a specific, sparse adjacency structure is novel**. Further, a significant contribution is our observation that the **objective of the mask factorization is directly linked to NN architecture, which can affect performance**, as demonstrated in our experiments above. 2. **Alternative methods for enforcing structure (such as GNF):** Reviewers noted that other methods of enforcing a prescribed adjacency structure exist.
Given D-dimensional input, one could use GNF, or initialize D independent neural networks, each accepting the dependent features for a specific variable. However, both these approaches require D forward passes to compute the output of a single input vector, whereas we only require one. **The lower asymptotic complexity of evaluating our model enables higher training efficiency** and is important when incorporating StrNN into density estimators such as CNFs (see above), which require many forward evaluations to compute a single ODE solution. 3. Several reviewers correctly noted that our current work on StrNN does not have the capability to **learn Bayesian networks from data**. **This was by design, given that the main assumption in our work was that we would have access to the ground truth DAG, whether from domain knowledge or from prior experiments with structure discovery algorithms.** To direct readers to these resources, we will add a citation to the **causal-learn package** in our final revision. We have further considered different ways we could utilize StrNN itself to learn structure from data, as discussed in our response to Reviewer Tj4q. Finally, we would like to thank all reviewers once again for your insightful responses, which have helped us refine our work and identify future directions. We are confident our updated manuscript will have addressed your concerns and suggestions, and we kindly invite you to reconsider our paper if you are satisfied with our responses. [1] Grathwohl, Will, et al. Ffjord: Free-form continuous dynamics for scalable reversible generative models. *International Conference on Learning Representations*, 2019. [2] Weilbach, Christian, et al. "Structured conditional continuous normalizing flows for efficient amortized inference in graphical models." *International Conference on Artificial Intelligence and Statistics*. PMLR, 2020. [3] Mouton, Jacobie, and Steve Kroon. "Graphical residual flows." *arXiv preprint arXiv:2204.11846* (2022).
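The faithfulness requirement that every factorization objective above must satisfy can be illustrated with a deliberately naive toy construction; this "copy-row" scheme is not the paper's greedy algorithm, only a demonstration of what a valid mask factorization looks like:

```python
import numpy as np

# Toy illustration of the faithfulness constraint in mask factorization:
# find binary masks M1 (H x D) and M2 (D x H) such that the nonzero pattern
# of M2 @ M1 equals that of the adjacency matrix A.
def naive_factorize(A: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    D = A.shape[0]
    M1 = A.copy().astype(float)   # hidden unit h reproduces row h of A
    M2 = np.eye(D)                # output i reads only hidden unit i
    return M1, M2

A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 1, 1, 0]], dtype=float)
M1, M2 = naive_factorize(A)
assert np.array_equal((M2 @ M1) > 0, A > 0)   # faithful to A

# Objectives such as "greedy" vs. "mincon" differ in how many connections the
# masked network keeps while staying faithful; here we simply count them.
frac = M1.sum() / M1.size
```

Among all faithful factorizations, the comparison in the rebuttal suggests that retaining more connections (the greedy objective) can yield better density estimation than minimizing them.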
NeurIPS_2023_submissions_huggingface
2023
Empowering Collaborative Filtering with Principled Adversarial Contrastive Loss
Accept (poster)
Summary: In this work, the authors focus on improving the generalization ability of top-K recommendation models via a proposed principled Adversarial InfoNCE loss (AdvInfoNCE). Existing contrastive-learning-based methods usually fail to consider the tailored inductive bias (such as hard negatives and false negatives) and lack sufficient theoretical understanding of their generalization ability. The proposed AdvInfoNCE loss can adaptively explore and assign a hardness (weight) to each negative instance in an adversarial fashion. The theoretical guarantees and experiments demonstrate the effectiveness of the proposed AdvInfoNCE loss. Strengths: 1. The motivation of adaptively assigning hardness to each negative instance in an adversarial fashion to improve generalization ability is justified. 2. The theoretical proof demonstrates the generalization ability of the proposed AdvInfoNCE loss. 3. The experiments on unbiased datasets and out-of-distribution settings demonstrate the effectiveness of the proposed AdvInfoNCE loss in terms of generalization ability. Weaknesses: 1. It would be better to add an intuitive example to show the adaptive learning process for hard negatives and false negatives in the min and max stages. 2. In terms of adaptively learning hardness for each negative instance, it would be better to add some baselines focusing on mining hard negatives or false negatives. 3. In terms of generalization ability, it would be better to add some baselines focusing on out-of-distribution robustness. Besides, OOD experiments are also performed on popularity-based distribution shift scenarios; what is the difference compared with the debiasing experiments? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you please give an intuitive description of the adaptive learning process in the min and max stages, especially for hard negatives and false negatives? 2.
Why is the corresponding item identified as a hard negative rather than a general negative when $\sigma > 0$? 3. In terms of adaptively learning hardness for each negative instance, could you please add some baselines focusing on mining hard negatives or false negatives [1,2]? 4. In terms of generalization ability, could you please add some baselines focusing on out-of-distribution robustness [3,4]? [1] Chen, Jin, et al. "Cache-Augmented Inbatch Importance Resampling for Training Recommender Retriever." In NeurIPS, 2022. [2] Chen, Jin, et al. "Learning Recommenders for Implicit Feedback with Importance Resampling." In WWW, 2022. [3] Hongyi Wen, et al. "Distributionally-robust Recommendations for Improving Worst-case User Experience." In WWW, 2022. [4] An Zhang, et al. "Invariant Collaborative Filtering to Popularity Distribution Shift." In WWW, 2023. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately point out the limitations, and there is no negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer $\color{green}{\text{1hVj}}$** We gratefully thank you for the valuable comments. To address your concerns, below we provide point-to-point responses. >**Comment 1 + Question 1: Intuitive example** We value your insightful comments. To better clarify the role of $\delta$ and its learning process for hard and false negatives, we **constructed additional illustrative examples**. The illustrative examples highlight two points. 1. AdvInfoNCE can effectively **identify false and hard negatives** via the learnable $\delta_j$ (max-stage). 2. $\delta$ helps to **refine the item ranking** compared to InfoNCE (min-stage). - **Identification of False and Hard Negatives:** On the Tencent training data, we trained both the InfoNCE and AdvInfoNCE models. Interactions unobserved during training but present in testing are labeled as false negatives (FN), otherwise true negatives (TN). Based on our theoretical assumption, for an FN we expect a more relaxed constraint, leading to $\delta < 0$. To validate this assumption, we introduce the 'FN identification rate', a metric measuring the proportion of FNs for which $\delta < 0$. As Fig 6 in the one-page uploaded PDF shows, our observations are consistent with our claim. As training proceeds, the FN identification rate increases, capping at nearly 70%. This reveals AdvInfoNCE's capability to identify approximately 70% of the FNs in the test set. We attribute the superior performance of AdvInfoNCE over InfoNCE to this gradual identification. - **Refinement of Item Ranking:** We randomly draw two users along with their FN and TN items, subsequently retrieving their associated $\delta$ values, ranking positions, and cosine similarities, as demonstrated in Fig 7. Consistent with our prior findings, for an FN, AdvInfoNCE generally assigns a negative $\delta$. This negative $\delta$, indicating a more lenient feasible-zone constraint, enables the recommender to achieve higher cosine similarity.
This, in turn, raises the FN's ranking. For instance, as Fig 7(a) shows, given $\delta=−0.7887$, AdvInfoNCE elevates an FN from the 55th to a commendable 5th position. Conversely, for a TN, AdvInfoNCE leans towards a positive $\delta$, narrowing the feasible zone and thus distancing it from the positives. An exemplary case is the TN $j_{6543}$ in Fig 7(a), where AdvInfoNCE, upon learning its $\delta = 1.1921$, lowers its rank from 257th to 4587th. Such real-world cases attest to $\delta$'s role in fine-tuning the recommendation ranking. In short, for a specific user $u$, the learnable $\delta$ variables measure item hardness and frame a fine-grained ranking criterion. >**Comment 2 + Comment 3: Add hard negative mining & OOD baselines** Thanks so much for bringing these excellent works to us. Following your suggestion, we have **conducted additional experiments** on XIR [1], S-DRO [2], and InvCF [3] on Tencent. We appreciate your recommendation to include AdaSIR [4]; however, due to the constraints of the rebuttal period, we plan to consider it as a baseline in our future work.
**Table 1: Overall performance on Tencent**

| | | $\gamma=200$ | | | $\gamma=10$ | | | $\gamma=2$ | | Validation |
| :----: | :----: | :----------: | :----: | :----: | :---------: | :----: | :----: | :--------: | :----: | :--------: |
| | HR | Recall | NDCG | HR | Recall | NDCG | HR | Recall | NDCG | NDCG |
| XIR | 0.1463 | 0.0538 | 0.0326 | 0.0936 | 0.0341 | 0.0211 | 0.0642 | 0.0245 | 0.0154 | 0.0883 |
| sDRO | 0.1455 | 0.0516 | 0.0286 | 0.0857 | 0.0304 | 0.0166 | 0.0552 | 0.0205 | 0.0110 | 0.0872 |
| InvCF | **0.1651** | **0.0605** | 0.0331 | 0.1061 | 0.0386 | 0.0204 | 0.0722 | 0.0272 | 0.0149 | **0.0912** |
| AdvInfoNCE | 0.1600 | 0.0594 | **0.0356** | **0.1087** | **0.0403** | **0.0243** | **0.0774** | **0.0295** | **0.0180** | 0.0879 |

We observe that:
- AdvInfoNCE consistently exhibits superior performance compared to hard negative mining methods and OOD baselines across varying test-set levels in terms of the NDCG metric. Such results underscore the robustness and generalization ability of AdvInfoNCE.
- For the case where $\gamma = 200$, although InvCF slightly edges out AdvInfoNCE in HR and Recall, a potential explanation lies in InvCF's design, which selects in-batch negatives (1024 in total). In contrast, AdvInfoNCE uses only 128 negatives. As can be inferred from another analysis (Table 3 in the response to Reviewer ikze), amplifying the number of negative samples could boost the performance of AdvInfoNCE. (Sorry for the inconvenience; due to space constraints, we cannot directly include that table here.)
- Specific strength of XIR: XIR demonstrates enhanced performance particularly when faced with data exhibiting a long-tailed distribution, i.e., $\gamma = 200$. We attribute this advantage of XIR to its ability to adaptively achieve a more accurate estimation of the softmax distribution.

>**Question 2: General or hard negative** Your question goes to the heart of our method.
In our paper, we define an item with $\delta > 0$ as a hard negative based on the gradient analysis presented in Appendix B.3. The gradient corresponding to a negative item $j$ is proportional to $\exp(\delta_j)$. In other words, for $\delta_j > 0$, the recommender pays more attention to this item by a factor of $\exp(\delta_j) > 1$. Such characteristics naturally fall under the category of hard negatives in the sense of hard negative mining. [1] Cache-Augmented Inbatch Importance Resampling for Training Recommender Retriever. 2022 [2] Distributionally-robust Recommendations for Improving Worst-case User Experience. 2022 [3] Invariant Collaborative Filtering to Popularity Distribution Shift. 2023 [4] Learning Recommenders for Implicit Feedback with Importance Resampling. 2022 --- Rebuttal Comment 1.1: Comment: Thank you for your efforts and responses. I will keep my rating. --- Rebuttal 2: Title: Follow-up discussion Comment: Thank you for your valuable feedback on our submission, particularly your suggestions to include **an illustrative example** and add **XIR, AdaSIR, S-DRO, and InvCF** as new baselines. These insightful suggestions enhance the quality of our work and better strengthen our claims. We hope that these improvements will be taken into consideration. If we have fully addressed your concerns about our paper, we would be grateful if you could re-evaluate it. If you have additional concerns, we remain open and would be more than happy to discuss them with you.
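To make the $\delta$ mechanism discussed in this rebuttal concrete, here is a schematic NumPy sketch of a hardness-weighted InfoNCE together with the 'FN identification rate' metric. It illustrates the described mechanism only; it is not the exact AdvInfoNCE objective, its temperature setup, or its adversarial constraint set, and all names and sizes are assumptions:

```python
import numpy as np

# Schematic hardness-weighted InfoNCE: each negative j carries a learnable
# hardness delta_j, so its contribution to the loss (and its gradient) is
# up-weighted as delta_j grows, and down-weighted when delta_j < 0.
def weighted_infonce(s_pos: float, s_neg: np.ndarray,
                     delta: np.ndarray, tau: float = 0.1) -> float:
    logits = np.concatenate(([s_pos], s_neg + delta)) / tau
    logits -= logits.max()                       # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

# "FN identification rate" from the rebuttal: fraction of false negatives
# whose learned delta is negative (i.e., given a relaxed constraint).
def fn_identification_rate(delta: np.ndarray, is_fn: np.ndarray) -> float:
    return float((delta[is_fn] < 0).mean())

rng = np.random.default_rng(0)
s_neg = rng.uniform(-1, 1, size=128)   # cosine similarities to 128 negatives
delta = np.zeros(128)
base = weighted_infonce(0.9, s_neg, delta)
delta[0] = 1.0                         # mark negative 0 as harder
assert weighted_infonce(0.9, s_neg, delta) > base  # harder negatives raise the loss
```

In an adversarial min-max setup of this kind, the $\delta$ vector would be updated to increase the loss (subject to constraints) while the recommender's parameters are updated to decrease it.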
Summary: This paper is on collaborative filtering (CF) enhanced by contrastive learning (CL). The authors point out that the adoption of CL into CF is suboptimal due to challenges such as the issue of out-of-distribution, the risk of false negatives, and the nature of top-K evaluation. They also note that current CL-based CF methods lack consideration of the tailored inductive bias for CF and have limited theoretical understanding of their generalization ability. To address these limitations, the authors propose a principled Adversarial InfoNCE loss for CF that focuses on mining hard negatives and distinguishing false negatives from the vast unlabeled user-item interactions. The proposed method is compared with several state-of-the-art contrastive learning-based CF methods on both unbiased and synthetic datasets. The experiments show that the proposed method outperforms the baselines in terms of accuracy and robustness, demonstrating its potential for improving the performance of recommender systems. Strengths: i) The paper proposes a novel approach to adaptive contrastive learning in collaborative filtering by adopting an adversarial approach. The proposed Adversarial InfoNCE loss addresses the limitations of existing methods and allows for the fine-grained assignment of hardness to each negative user-item pair, which enhances the recommender's generalization ability and empowers top-K recommendations via informative negative sampling. ii) The paper provides innovative theoretical insights into the benefits of adversarial hardness learning. It shows that the proposed hardness scores are correlated with the out-of-distribution problem of recommendation, and can thereby enhance the generalization ability of recommenders. iii) The study of hardness gives an in-depth analysis on the learned hardness scores, which uncovers the importance of learning correct hardness for the massive negative samples without observations. 
Weaknesses: i) The proposed method can be viewed as an adaptive SSL method for recommendation [1-2]. It can also be seen as a learnable negative sampling approach [3-4]. A literature review (a baseline comparison would be even better) should be done for these two very relevant research lines. ii) Though a brief training-cost experiment is given in the appendix, I would expect more detailed efficiency experiments or analysis to support the claim that the proposed AdvInfoNCE can serve as a foundational loss for future CF research, since the proposed approach relies on adversarial training. [1] Graph contrastive learning with adaptive augmentation [2] Automated Self-Supervised Learning for Recommendation [3] Personalized Ranking with Importance Sampling [4] AHP: Learning to Negative Sample for Hyperedge Prediction Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It would be better for the authors to clarify the relations between the proposed method and the related research lines mentioned above. I would also expect the authors to give a more detailed efficiency analysis or experiments. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: **Response to Reviewer $\color{purple}{\text{qmEP}}$** Thanks so much for your time and positive feedback! To address your concerns, we present point-to-point responses as follows. We have carefully revised our paper, taking all your feedback into account.

>**Comment 1: Clarify the relations**
- "The proposed method can be viewed as an adaptive SSL method for recommendation. Also, it can be a learnable negative sampling approach. A literature review (baseline comparison would do better) should be done for these two very relevant research lines."

Thanks for your insightful comments. We acknowledge the importance of clarifying the relationship between our work and both adaptive SSL methods and learnable negative sampling approaches. Following your suggestion, we have carefully **revised the related work section** to provide a more comprehensive literature review that situates our method with respect to both paradigms. Additionally, heeding your advice, we have **incorporated two new baselines**: an adaptive SSL method, AutoCF [1], and a negative sampling approach, XIR [2]. The results are summarized in Table 1:

**Table 1: Two new baselines on Tencent**

| | | $\gamma=200$ | | | $\gamma=10$ | | | $\gamma=2$ | | Validation |
| :----: | :----: | :----------: | :----: | :----: | :---------: | :----: | :----: | :--------: | :----: | :--------: |
| | HR | Recall | NDCG | HR | Recall | NDCG | HR | Recall | NDCG | NDCG |
| AutoCF | 0.1560 | 0.0570 | 0.0317 | 0.1092 | 0.0399 | 0.0227 | 0.0797 | 0.0295 | 0.0172 | 0.0723 |
| XIR | 0.1463 | 0.0538 | 0.0326 | 0.0936 | 0.0341 | 0.0211 | 0.0642 | 0.0245 | 0.0154 | **0.0883** |
| AdvInfoNCE | **0.1600** | **0.0594** | **0.0356** | **0.1087** | **0.0403** | **0.0243** | **0.0774** | **0.0295** | **0.0180** | 0.0879 |

We observe that:
- Consistency in superiority: AdvInfoNCE consistently showcases better performance than both AutoCF and XIR across various levels of out-of-distribution test sets.
This demonstrates the robustness and generalization ability of our approach.
- XIR's specific strength: XIR demonstrates enhanced performance particularly when faced with data exhibiting a long-tailed distribution. We attribute this advantage to XIR's ability to adaptively achieve a more accurate estimation of the softmax distribution.
- Impressive performance of AutoCF: Surprisingly, on some metrics, AutoCF exceeds the performance of the second-best methods reported in our original submission. We believe the reason for AutoCF's success lies in its ability to perform masked subgraph augmentation automatically.

These experiments have enriched our understanding of where AdvInfoNCE stands in the current research landscape. We have incorporated these insights into our revised manuscript.

>**Comment 2: Efficiency analysis**
- "Though a brief training-cost experiment is given in the appendix, I would expect more detailed efficiency experiments or analysis to support the claim that the proposed AdvInfoNCE can serve as a foundational loss for future CF research, since the proposed approach relies on adversarial training."

We highly appreciate your insightful suggestions. Following them, we have **analyzed the time complexity** of our proposed AdvInfoNCE and, for a comprehensive comparison, contrasted it with two standard CF losses (BPR and InfoNCE) and other contrastive learning-based CF methods (CCL, BC loss, Adap-$\tau$). Let us define some notation for clarity:
- $n$: number of users
- $d$: embedding size
- $N$: number of negative samples
- $M = |\mathcal{O}^{+}|$: number of observed interactions
- $B$: batch size
- $N_{b}$: number of mini-batches within one epoch

Clearly, the total time complexity of one epoch without backward propagation for the BPR loss is $O(N_{b}Bd)$. For AdvInfoNCE, the similarity calculation for one positive item along with $N$ negative items costs $O((N+1)d)$, and the hardness calculation costs $O(Nd)$.
This implies that the training cost of AdvInfoNCE, though slightly higher than that of the BPR loss, shares the same complexity order as InfoNCE. Detailed computational costs of the sampling-based losses are summarized as follows:

**Table 2: Time complexity comparison**

| InfoNCE | CCL | BC Loss | Adap-$\tau$ | AdvInfoNCE |
| :-----: | :-----: | :-----: | :-----: | :-----: |
| $O(N_{b}B(N+1)d)$ | $O(N_{b}B(N+1)d)$ | $O(N_{b}B(N+1)d)$ | $O(N_{b}B(N+1)d+(M+n)d)$ | $O(N_{b}B(N+1)d)$ |

[1] Automated Self-Supervised Learning for Recommendation. 2023
[2] Cache-Augmented Inbatch Importance Resampling for Training Recommender Retriever. 2022

---

Rebuttal Comment 1.1:
Comment: Thank you for your responses. I believe this paper is a clear accept and have raised my rating.

---

Reply to Comment 1.1.1:
Title: Thanks!
Comment: We appreciate your acknowledgment of our efforts in addressing the concerns. Your insightful comments have been instrumental in enhancing the quality of our work.
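To make the dominant $O(N_{b}B(N+1)d)$ similarity term from the complexity discussion concrete, here is a small per-batch sketch (the sizes are illustrative only, not taken from the paper):

```python
import numpy as np

# Illustrative sizes: batch B, negatives per interaction N, embedding size d.
B, N, d = 256, 128, 64

rng = np.random.default_rng(0)
u   = rng.standard_normal((B, d))      # user embeddings for the batch
pos = rng.standard_normal((B, d))      # one positive item per user
neg = rng.standard_normal((B, N, d))   # N sampled negatives per interaction

# Similarity block: B*(N+1) dot products of length d => O(B*(N+1)*d) per batch,
# hence O(N_b * B * (N+1) * d) per epoch over N_b mini-batches.
s_pos = np.einsum("bd,bd->b", u, pos)     # shape (B,)
s_neg = np.einsum("bd,bnd->bn", u, neg)   # shape (B, N)

print(s_pos.shape, s_neg.shape, B * (N + 1) * d)
```

The extra hardness term ($O(Nd)$ per interaction) is absorbed into the same order, which is why AdvInfoNCE appears in the same complexity class as InfoNCE in Table 2.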
Summary: This paper studies contrastive learning (CL) in collaborative filtering (CF) for top-k recommendation. In particular, it focuses on the CF-tailored challenges for CL, and then presents the adversarial InfoNCE (AdvInfoNCE) loss. This loss dynamically assigns hardness to negative instances and incorporates a fine-grained ranking criterion to improve the CF recommender’s generalization ability. Furthermore, this paper highlights theoretical properties of AdvInfoNCE. Experiments on both synthetic and real-world datasets show the effectiveness of this loss, especially in out-of-distribution scenarios. Strengths: 1. The motivation of considering fine-grained hardness-aware ranking criteria is clear and reasonable. Technically, it is insightful and novel to transform the distinction between false and hard negatives into a margin-learning problem. 2. As proved in Sec 3.3, the adversarial training framework of AdvInfoNCE is natural and essentially equivalent to solving a distributionally robust optimization (DRO) problem. I appreciate this theoretical guarantee, which can endow AdvInfoNCE-trained CF models with better generalization ability. 3. The experiments are done on four datasets with two CF base models, which is sufficient to demonstrate the effectiveness of the proposed loss. Moreover, the selected baselines are quite new, including some recent and strong works. 4. The proposed loss seems to be a general loss that can be applied to general recommendation models. Weaknesses: 1. The hardness is denoted as $\delta$ in Sec 3.2, while related to $p(j|(u,i))$. This is not explained clearly. More clarification is needed here. 2. In Line 214, the limitation is stated as 'training instability', which is not empirically shown in the experiments, e.g., via training loss variance. It would be better to discuss this instability in more depth. 3. Although the proof of Theorem 3.1 seems correct, the KL-divergence in Line 195 is missing a minus sign.
Please double check and fix it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The hardness is denoted as $\delta_{j}^{(u)}$ first and then $\delta_{j}^{(u,i)}$, which seems confusing. Is it related to a specific user $u$ or a user-item pair $(u,i)$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Please refer to the weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: **Response to Reviewer $\color{orange}{\text{PLbw}}$** We thank the reviewer for the thorough and valuable feedback. To address your concerns, we present point-to-point responses as follows. We have carefully revised our paper by taking all your suggestions into account, and we look forward to further discussion with you.

>**Comment 1: Clearer explanation**
- "The hardness is denoted as $\delta$ in Sec 3.2, while related to $p(j|(u,i))$. This is not explained clearly. More clarifications are needed here."

Thank you for highlighting the need for a clearer explanation of the relation between hardness and the negative sampling probability. Following your suggestion, we have **revised the methodology section** in the manuscript. To address your question directly, we provide a detailed elaboration here:

In our paper, specifically at Line 195, the hardness $\delta_j^{(u,i)}$ of a negative item $j$ with respect to an observed interaction $(u,i)$ is defined as $\delta_j^{(u,i)} \doteq \log(|N_u|\cdot p(j|(u,i)))$. Here $|N_u|$ denotes the total count of negative items for user $u$, and $p(j|(u,i))$ is the probability of picking item $j$ as a negative given the observed interaction $(u,i)$. By defining the hardness in this manner, we tie it closely to a particular negative sampling strategy. Moreover, as outlined in Theorem 3.1, optimizing the AdvInfoNCE loss is equivalent to solving a distributionally robust optimization (DRO) problem over the negative sampling strategy. For instance, when uniform negative sampling is employed (i.e., $p(j|(u,i)) = \frac{1}{|N_u|}$), the hardness value $\delta_j^{(u,i)}$ is always 0. This offers an insight: InfoNCE is the special case of AdvInfoNCE that incorporates uniform negative sampling.
As a result, with no differences among the $\delta_j$, InfoNCE is unable to differentiate between true negatives and false negatives, which is consistent with our earlier illustration in Figure 1 (a).

> **Comment 2: More discussion about instability**
- "In Line 214, the limitation is stated as the 'training instability', which is not empirically shown in the experiments, such as indicated by 'training loss variance'. It would be better to discuss more about this instability."

Thanks for your valuable comments. The "instability" here refers to the instability and inconsistency of the hyperparameter ranges (i.e., the learning rate and number of epochs for adversarial training) across different datasets. As reported in Table 9, the optimal adversarial learning rate varies widely across datasets (e.g., 1e-2 for Coat and 5e-5 for Tencent). Moreover, Figure 3 indicates that increasing the number of adversarial training epochs without constraint leads to a significant decline in in-distribution performance. Additionally, when exploring the role of the number of negative samples, we found that these two hyperparameters also require delicate adjustment for different negative sample counts. Inappropriate adversarial learning rates can result in varying degrees of hardness mining, and consequently in inconsistent and suboptimal performance.

> **Comment 3: Missing minus sign**
- "Although the proof of Theorem 3.1 seems correct, the KL-divergence in Line 195 misses a minus sign. Please double check and fix it."

Thank you for your careful reading and notification. You are right: there is a missing minus sign in the proof of Theorem 3.1. We sincerely apologize for the oversight, and we have **rectified the error**. To prevent such errors from recurring, we have conducted a thorough review of all mathematical expressions and formulas throughout the paper, ensuring their accuracy and consistency. We truly appreciate your attention to detail.
Thank you for helping us improve the precision and clarity of our work.

> **Question 1:**
- "The hardness is denoted as $\delta_j^{(u)}$ first and then $\delta_j^{(u,i)}$, which seems confusing. Is it related to a specific user u or a user-item pair (u, i)?"

Thank you for highlighting this discrepancy in notation. You are right. The term $\delta_j^{(u,i)}$ denotes the hardness of a negative item $j$ with respect to an observed interaction $(u,i)$. In the methodology section of our original paper, specifically around Line 150, we simplified the notation to focus on a single user-item pair $(u,i)$, to ease understanding for the reader. As we progressed further into the methodology, after Line 177, we expanded our considerations to account for all observed interactions, thereby introducing the notation $\delta_j^{(u,i)}$. We acknowledge that this shift in notation might cause confusion, and we truly appreciate your keen attention to detail. We will include an explanatory note or rephrase to make this transition clearer in future versions.

---

Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. My concerns have been resolved. BTW, I specifically appreciate the illustrative examples provided, which clearly present how the proposed model works. Given this, I increase my score slightly.

---

Reply to Comment 1.1.1:
Title: Thanks, reviewer!
Comment: We sincerely thank you for recognizing our efforts in the rebuttal. We appreciate your decision to increase the score. Your feedback has been invaluable to our work.
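The definition $\delta_j^{(u,i)} = \log(|N_u|\cdot p(j|(u,i)))$ discussed in this thread is easy to sanity-check numerically. The sketch below (with a hand-picked, hypothetical sampling distribution) confirms that uniform sampling yields $\delta = 0$, i.e. plain InfoNCE:

```python
import numpy as np

def hardness(p, n_neg):
    """delta_j = log(|N_u| * p(j|(u,i))), following the definition above."""
    return np.log(n_neg * np.asarray(p))

n_neg = 5
uniform = np.full(n_neg, 1.0 / n_neg)           # uniform negative sampling
skewed  = np.array([0.5, 0.2, 0.1, 0.1, 0.1])   # a hypothetical non-uniform strategy

print(hardness(uniform, n_neg))  # all zeros -> AdvInfoNCE reduces to InfoNCE
print(hardness(skewed, n_neg))   # delta > 0 = harder; delta < 0 = more lenient
```

Items sampled more often than uniform get $\delta > 0$ (treated as harder), items sampled less often get $\delta < 0$ (a more relaxed constraint), consistent with the DRO interpretation in Theorem 3.1.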
Summary: Current losses for collaborative filtering struggle to handle the issue of unobserved user-item pairs. Typically, the approach is to treat unseen pairs as negatives and seen pairs as positives, but this is somewhat problematic because unseen pairs could just be unobserved positives. The authors propose an adversarial InfoNCE-based loss that claims to address this problem in collaborative filtering. This loss works by minimizing the InfoNCE loss given adversarially learned weights for the negative samples. They give a theorem showing this proposed loss can be interpreted as a distributionally robust optimization problem. Finally, they give some empirical results showing the efficacy of their method over other CF baselines. Strengths: 1. This paper provides a method that appears to give solid gains across various collaborative filtering tasks. 2. They do make an attempt to interpret what the \delta (adversarially learned parameters) in their method are doing. 3. The graphs for the results and the pictures explaining the methods are good and helped me with understanding. 4. The authors try to tackle a hard problem: it is hard to think about how to best utilize unseen pairs in collaborative filtering due to their unseen nature. 5. The method is fairly novel as a nontrivial extension of InfoNCE to the collaborative filtering setting via adversarial training. Weaknesses: 1. I don't understand the role of the adversarial variables in the algorithm. In figure 2a there was some attempt at interpreting the values of the deltas, but it still does not make sense. I hope the authors can explain the role of the variables better. 2. I think the terminology of "hard negative" is confusing, because typically in the self-supervised learning literature people call negatives that are near the decision boundary "hard negatives". However, in this paper hard negatives are the opposite: negatives that are far away from the positive pair.
I suggest rewriting the paper to make the message clearer. 3. In general, the paper is hard to understand and has many grammatical errors. The authors should fix this to make the paper easier to read. 4. From what I can understand, the loss should simply make the deltas as large as possible (positive) to increase the loss value, given that there are no constraints (aside from the number of epochs trained, I guess). I have concerns about the usefulness and stability of this algorithm. 5. In Theorem 3.1, we assume that the deltas imply a probability distribution. Is this true? As we train, do the deltas for a user i add up to |N_i|? I'm not sure there is a constraint enforcing this. In that case I'm not sure the theory applies to the algorithm as-is. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: My questions are in the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Seems to be sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: **Response to Reviewer $\color{red}{\text{kSe8}}$** We appreciate your comments, some of which inspired us to greatly improve our paper. Below we provide point-to-point responses to address your concerns and clarify misunderstandings of our proposed method. If you have additional questions, we would be pleased to discuss them with you.

> **Comment 2: Terminology confusion**
- "I think the terminology of "hard negative" is confusing, because typically in the self-supervised learning literature people call negatives that are near the decision boundary "hard negatives". However, in this paper hard negatives are the opposite: negatives that are far away from the positive pair. ..."

We agree that hard negatives are negatives lying close to the decision boundary, making them hard to classify. However, a more capable model, exhibiting superior generalization, aims to reshape the representation space by pushing those hard negatives far from the decision boundary [1,2]. Figure 1 in our original submission offers a visual explanation of this concept. It depicts the feasible zone for hard and false negatives under both InfoNCE and AdvInfoNCE. With AdvInfoNCE, hard negatives are purposefully distanced from the positive pairs from a feasible-zone perspective. By adjusting the feasible zone through $\delta$ in AdvInfoNCE, we effectively re-rank the negatives in representation space. We are confident in our choice of terminology and welcome further discussion if any points require elaboration.

> **Comment 1: Role of $\delta$**
- "I don't understand the role of the adversarial variables..."

We value your comments. To better clarify the role of $\delta$ and its learning process for hard and false negatives, we have **constructed additional illustrative examples**. These examples highlight two points. 1. AdvInfoNCE can effectively **identify false and hard negatives** via the learnable $\delta_j$ (max-stage). 2.
$\delta$ helps to **refine the item ranking** compared to InfoNCE (min-stage).
- **Identification of False and Hard Negatives:** On the Tencent training data, we trained both the InfoNCE and AdvInfoNCE models. Interactions unobserved during training but present in testing are labeled as false negatives (FN); the rest are true negatives (TN). Based on our theoretical assumption, an FN should receive a more relaxed constraint, leading to $\delta < 0$. To validate this assumption, we introduce the 'FN identification rate', the proportion of FNs with $\delta < 0$. As Fig 6 in the uploaded one-page PDF shows, our observations are consistent with our claim. As training proceeds, the FN identification rate increases, capping at nearly 70%. This reveals AdvInfoNCE's capability to identify approximately 70% of the FNs in the test set. We attribute the superior performance of AdvInfoNCE over InfoNCE to this gradual identification.
- **Refinement of Item Ranking:** We randomly draw two users along with their FN and TN items, and retrieve their associated $\delta$ values, ranking positions, and cosine similarities, as shown in Fig 7. Consistent with our prior findings, for an FN, AdvInfoNCE generally assigns a negative $\delta$. This negative $\delta$, indicating a more lenient feasible-zone constraint, enables the recommender to achieve higher cosine similarity, which in turn raises the FN's ranking. For instance, as Fig 7(a) shows, given $\delta=-0.7887$, AdvInfoNCE elevates an FN from the 55th to the 5th position. Conversely, for a TN, AdvInfoNCE leans towards a positive $\delta$, narrowing the feasible zone and thus distancing it from the positives. An exemplary case is the TN $j_{6543}$ in Fig 7(a): upon learning its $\delta = 1.1921$, AdvInfoNCE lowers its rank from 257th to 4587th. Such real-world cases attest to $\delta$'s role in fine-tuning recommendation ranking.
In a word, for a specific user $u$, the learnable $\delta_j$ measures the hardness of item $j$ and further frames a fine-grained ranking criterion.

> **Comment 3: Hard to understand**
- "In general, the paper is hard to understand ..."

Thanks. We have thoroughly proofread the manuscript and sincerely hope that the revised version provides a smoother reading experience, in line with the feedback we have received from other reviewers.

> **Comments 4 & 5: Constraints of the algorithm**
- "From what I can understand, the loss should simply make the deltas as large as possible (positive) ..." "In theorem 3.1, we assume that the deltas imply a probability distribution. Is this true? ..."

Thanks for your feedback. While we do delineate the constraints of AdvInfoNCE in various parts of our original manuscript, such as in Lines 180, 187, 199-204, 547 and Eq 9, we recognize the need to highlight them more prominently. We define $\delta_j^{(u,i)}$ as $\log(|N_u|\cdot p(j|(u,i)))$, where $p(j|(u,i))$ denotes the probability of sampling negative item $j$ for the observed interaction $(u,i)$. This implies that the sum of $p(j|(u,i))$ over all $j$ equals 1. Moreover, the bounds for $\delta_j^{(u,i)}$ are strictly set to $(\log(1-|N_u|\epsilon), \log(1+|N_u|\epsilon))$, as given in Eq 9. To provide empirical insight into the $\delta$ values, we list their mean and standard deviation during training: (-0.0, 0.0003), (-1e-4, 0.0164), (-0.0028, 0.0733), (-0.0128, 0.1549), (-0.0353, 0.2515), (-0.0747, 0.3572), (-0.1343, 0.4676). Note that the mean of $\delta$ equals $- D_{KL}(P_0||P)$, as stated in Line 195.

[1] ArcFace: Additive Angular Margin Loss for Deep Face Recognition
[2] Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering

---

Rebuttal 2:
Title: Follow-up discussion
Comment: Thank you for taking the time to review our paper.
We appreciate your feedback and hope our responses address your concerns, especially regarding the terminology and the constraints of the algorithm. We hope these clarifications help you reassess our paper. If you have additional concerns, we would be more than happy to provide further clarification. Thank you for your attention.

---

Rebuttal 3:
Title: I raise my score.
Comment: Based on the rebuttals given by the authors and other reviews, I revise my score upwards. I believe the findings in this paper would be a valuable contribution to the conference. I have one more question. Why should we expect that a hard negative more likely corresponds to a true negative at test time (as shown in Figure 7)? I can see that the experimental results seem to support this, but why should this be intuitively true?

---

Rebuttal Comment 3.1:
Comment: We would like to express our sincere appreciation for your review and for taking the time to reconsider our work based on the provided rebuttals. Our response to your question regarding the hard negatives is on its way. Thanks again.

---

Rebuttal Comment 3.2:
Title: Response to new questions
Comment: We appreciate you posing this insightful question about the relationship between true negatives and hard negatives. At its core, the hypothesis behind AdvInfoNCE is grounded in this fundamental assumption. Allow us to elucidate our rationale:

1. **Definition of True Negatives and Hard Negatives.**
- **True Negatives:** Within the recommendation framework of offline testing, negatives denote interactions absent from the training phase, while true negatives denote items that are absent from both the training and testing phases and that the user intrinsically dislikes, which is unknown. The ultimate goal of offline recommendation is to discern potential interactions (positives) for testing based on the understanding gained from fitting the training data.
Equivalently, offline recommendation testing is essentially the task of identifying which items are true negatives: items that, even if exposed to the user, are unlikely to be clicked on. In other words, the recommender aims to keep such true negatives out of the top rankings of the recommendation list.
- **Hard Negatives** learned by AdvInfoNCE: Hard negatives are interactions assigned a positive hardness value $\delta$ by AdvInfoNCE. Such a value narrows the feasible zone, distancing the interaction's representation from the positives. In essence, AdvInfoNCE de-prioritizes these hard negatives in the ranking. Hence, we expect AdvInfoNCE to successfully identify the true negatives in the dataset as its learned hard negatives.

2. **Hard negative mining justification.** Here, we also want to elucidate why we view the process of identifying true negatives as akin to hard negative mining. We define an item with $\delta > 0$ as a hard negative based on the gradient analysis presented in Appendix B.3. The gradient associated with a negative item $j$ is proportional to $\exp(\delta_j)$. In other words, for $\delta_j > 0$, the recommender pays more attention to this item by a factor of $\exp(\delta_j) > 1$. This trait aligns with the concept of hard negatives in hard negative mining.

We hope this response offers a clearer, more concise understanding. Let us know if further clarification is required.
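The 'FN identification rate' used earlier in this thread (the proportion of false negatives assigned $\delta < 0$) can be sketched as follows; the $\delta$ values and FN labels here are toy stand-ins, not the learned scores from the paper:

```python
import numpy as np

def fn_identification_rate(delta, is_false_negative):
    """Fraction of false negatives (FNs) that receive delta < 0."""
    fn = np.asarray(is_false_negative, dtype=bool)
    return float((np.asarray(delta)[fn] < 0.0).mean())

# Toy hardness scores and FN labels (hypothetical, for illustration only).
delta = np.array([-0.79, 1.19, -0.10, 0.40, -0.55])
is_fn = np.array([True, False, True, False, False])

print(fn_identification_rate(delta, is_fn))  # 1.0: both FNs have delta < 0
```

In the thread's experiment this rate, computed over the test-set FNs as training proceeds, is what caps at roughly 70% in Fig 6.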
Rebuttal 1:
Rebuttal: We are delighted to see the contributions of our paper acknowledged by the majority of the Reviewers. Specifically, we appreciate the Reviewers' recognition of our motivation and theoretical analysis ($\color{blue}{\text{ikze}}$, $\color{orange}{\text{PLbw}}$, $\color{purple}{\text{qmEP}}$, $\color{green}{\text{1hVj}}$), and of our novelty ($\color{blue}{\text{ikze}}$, $\color{red}{\text{kSe8}}$, $\color{green}{\text{1hVj}}$). We thank all the reviewers for their valuable comments and suggestions, which helped improve our submission and strengthen our claims. Taking the Reviewers' suggestions into account, we summarize the updates to the paper as follows:
- **More detailed explanations.** Addressing the concerns raised by Reviewers $\color{red}{\text{kSe8}}$ and $\color{green}{\text{1hVj}}$, we have incorporated **two additional illustrative experiments** on the dynamic evolution of the fine-grained hardness $\delta_{j}$ and case studies explaining its effect.
- **Experiments on two collaborative filtering backbones.** Following the suggestions of Reviewer $\color{blue}{\text{ikze}}$, we have conducted experiments on **two new CF backbones** to validate the generalization ability of AdvInfoNCE.
- **Comparison experiments.** In response to Reviewers $\color{purple}{\text{qmEP}}$ and $\color{green}{\text{1hVj}}$, we have added literature reviews and **four supplementary comparison experiments** in the fields of negative sampling, adaptive SSL, and recommendation debiasing.
- **Details about AdvInfoNCE.** We have incorporated a detailed discussion of AdvInfoNCE, including hyperparameter selection, instability, and computational complexity, to address the concerns of Reviewers $\color{blue}{\text{ikze}}$, $\color{orange}{\text{PLbw}}$ and $\color{purple}{\text{qmEP}}$.

We have tried our best to address the main concerns raised by the reviewers, and we hope that these improvements will be taken into consideration.
We also present the point-to-point responses for each reviewer below. Pdf: /pdf/fc38eb1d4fd19ec4d6ed59c6f527dcc44b115df8.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a principled AdvInfoNCE loss for CF methods to improve generalization ability. It utilizes a fine-grained hardness-aware ranking criterion to assign weights to unobserved user-item interactions. In this way, it enables a better distinction between different negatives, thus mitigating the inductive bias in CF-based methods. It provides theoretical proof of the effectiveness of the AdvInfoNCE loss, and the experimental results compared with other popular losses used in recommenders look promising. Strengths: It is well-written and easy to follow. Good motivation of improving the generalizability of CF-based methods. It provides theoretical guarantees for the loss design and conducts comprehensive analysis of the effectiveness of its method. Experimental results compared with other popular loss functions adopted in CF models look promising. Code is open. Weaknesses: Experiments could be more extensive. The results on MF and LightGCN look promising, but I think it would be more convincing if the authors considered more CF-based backbones like MultVAE [1] and DGCF [2]. [1] Liang et al. Variational Autoencoders for Collaborative Filtering. WWW 2018. [2] Wang et al. Disentangled Graph Collaborative Filtering. SIGIR 2020. Technical Quality: 3 good Clarity: 3 good Questions for Authors: This paper proposes a novel contrastive loss for fine-grained ranking of negative samples to improve the generalizability of CF-based models, which is a significant contribution. It provides comprehensive theoretical analysis, and the experiments look promising. I just have minor concerns about the technical details. - Did you attempt other similarity measurements for calculating the hardness, and does the choice of similarity calculation matter? - What is the rate of negative samples over all unobserved interactions used in your method, and is there any result with different negative sampling rates?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: More CF-based backbones can be considered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer $\color{blue}{\text{ikze}}$** We sincerely thank you for your time and valuable comments. Your main suggestion of considering additional CF backbones helps us substantiate the wide applicability of AdvInfoNCE. > **Comment 1: Additional CF-based backbones** - "Experiments can be more extensive. ..." Thanks for your great suggestions! We fully agree that considering a broader range of CF-based backbones will better showcase the applicability of the AdvInfoNCE loss. While the rebuttal period is time-constrained, we have incorporated the AdvInfoNCE loss into both a GCN-based CF backbone, UltraGCN [3], and a VAE-based CF backbone, VGAE [4]. We deeply appreciate your suggestion of considering DGCF [2] and MultVAE [1]. As outlined in the DGCF paper, it fundamentally emerges as a special case of LightGCN under the multi-intent assumption. Given the extensive experiments we have already conducted with LightGCN, we chose UltraGCN, which offers an ultra-simplified formulation beyond LightGCN. As for MultVAE, it primarily takes into account the user-by-item interaction matrix instead of traditional user/item embeddings. Adapting AdvInfoNCE to MultVAE requires re-defining interaction-wise negative sampling and restructuring the entire training pipeline, which we leave for future work. What we accomplished during the rebuttal period was to demonstrate how AdvInfoNCE can be applied to another VAE-based backbone, VGAE. To demonstrate the applicability of AdvInfoNCE under UltraGCN and VGAE, we show the results on Tencent in Table 1. Clearly, AdvInfoNCE boosts the recommendation performance of UltraGCN and VGAE across various OOD settings by a large margin. More detailed analyses can be found in the Appendix of our revised paper. 
**Table 1: Overall performance for UltraGCN and VGAE backbones**

| | | $\gamma=200$ | | | $\gamma=10$ | | | $\gamma=2$ | | Validation |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | HR | Recall | NDCG | HR | Recall | NDCG | HR | Recall | NDCG | NDCG |
| UltraGCN | 0.0930 | 0.0343 | 0.0190 | 0.0567 | 0.0215 | 0.0119 | 0.0400 | 0.0157 | 0.0095 | 0.0682 |
| UltraGCN + InfoNCE | 0.1436 | 0.0519 | 0.0303 | 0.0896 | 0.0324 | 0.0189 | 0.0617 | 0.0227 | 0.0135 | 0.0842 |
| UltraGCN + AdvInfoNCE | $\underline{0.1538}$ | $\underline{0.0569}$ | $\underline{0.0338}$ | $\underline{0.1025}$ | $\underline{0.0380}$ | $\underline{0.0227}$ | $\underline{0.0726}$ | $\underline{0.0276}$ | $\underline{0.0168}$ | 0.0883 |
| VGAE + InfoNCE | 0.1482 | 0.0536 | 0.0315 | 0.0923 | 0.0338 | 0.0202 | 0.0640 | 0.0237 | 0.0141 | 0.0823 |
| VGAE + AdvInfoNCE | **0.1588** | **0.0589** | **0.0353** | **0.1069** | **0.0395** | **0.0239** | **0.0778** | **0.0296** | **0.0182** | 0.0871 |

> **Question 1: Different similarity measurements** - "Did you attempt other similarity ...?" Thank you for highlighting this question. In light of this, we **conducted new experiments** evaluating both inner product and cosine similarity measurements for hardness calculation. As indicated in Table 2, the choice between these measurements does not introduce significant discrepancies in performance. 
**Table 2: Varying similarity measurements**

| | | $\gamma=200$ | | | $\gamma=10$ | | | $\gamma=2$ | | Validation |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | HR | Recall | NDCG | HR | Recall | NDCG | HR | Recall | NDCG | NDCG |
| AdvInfoNCE-cosine | 0.1561 | 0.0581 | 0.0346 | 0.1046 | 0.0389 | 0.0237 | 0.0750 | 0.0286 | 0.0175 | **0.0881** |
| AdvInfoNCE-inner product | **0.1600** | **0.0594** | **0.0356** | **0.1087** | **0.0403** | **0.0243** | **0.0774** | **0.0295** | **0.0180** | 0.0879 |

> **Question 2: Different negative sampling rate** - "What is the rate ...?" Your point on the negative sampling rate is noteworthy. Detailed information on the negative samples for the different datasets can be found in the Appendix (Table 9). As expounded in Theorem 3.1, our AdvInfoNCE design capitalizes on obtaining high-quality hard negative samples from a distribution perspective. Theoretically, increasing the number of negative samples should yield a more favorable negative distribution. However, the trade-off with computational expense led us to adopt 128 negative samples for all baseline methods on Tencent. In response to your query, we evaluated two other rates (64 and 256 samples), as shown in Table 3. Our findings are consistent with our initial expectation: higher negative sampling rates do offer performance enhancements for AdvInfoNCE. 
**Table 3: Varying the number of negative samples on Tencent**

| | | $\gamma=200$ | | | $\gamma=10$ | | | $\gamma=2$ | | Validation |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | HR | Recall | NDCG | HR | Recall | NDCG | HR | Recall | NDCG | NDCG |
| 64 | 0.1513 | 0.0563 | 0.0333 | 0.1006 | 0.0373 | 0.0225 | 0.0708 | 0.0269 | 0.0164 | 0.0854 |
| 128 | 0.1600 | 0.0594 | 0.0356 | 0.1087 | 0.0403 | 0.0243 | 0.0774 | 0.0295 | 0.0180 | 0.0879 |
| 256 | **0.1642** | **0.0609** | **0.0367** | **0.1125** | **0.0419** | **0.0253** | **0.0815** | **0.0310** | **0.0189** | **0.0889** |

[1] Variational Autoencoders for Collaborative Filtering. 2018 [2] Disentangled Graph Collaborative Filtering. 2020 [3] UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. 2021 [4] Variational Graph Auto-Encoders. 2016 --- Rebuttal Comment 1.1: Comment: Thank you for the efforts. My concerns have been addressed and I will maintain my rating.
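To make the hardness discussion in Questions 1 and 2 above concrete, here is a minimal NumPy sketch of a hardness-weighted InfoNCE objective supporting both similarity measurements from Table 2. The function names, variable names, and the exact softmax weighting of negatives are our illustrative simplification, not the released AdvInfoNCE implementation:

```python
import numpy as np

def hardness(user, items, measure="inner"):
    """Similarity-based hardness scores of candidate items for one user.

    Both measurements compared in Table 2 are supported; the names here
    are illustrative, not taken from the released code.
    """
    if measure == "inner":
        return items @ user
    if measure == "cosine":
        u = user / np.linalg.norm(user)
        v = items / np.linalg.norm(items, axis=1, keepdims=True)
        return v @ u
    raise ValueError(f"unknown measure: {measure}")

def adv_infonce_sketch(user, pos, negatives, delta, tau=0.1, measure="inner"):
    """Hardness-weighted InfoNCE: negative item j is re-weighted by a
    softmax over (learnable) hardness parameters delta_j, so harder
    negatives contribute more to the denominator."""
    s_pos = hardness(user, pos[None, :], measure)[0]
    s_neg = hardness(user, negatives, measure)
    n = negatives.shape[0]
    # mean-one weights over negatives derived from the hardness parameters
    w = n * np.exp(delta) / np.exp(delta).sum()
    denom = np.exp(s_pos / tau) + np.sum(w * np.exp(s_neg / tau))
    return -(s_pos / tau - np.log(denom))
```

Note that swapping `measure` between `"inner"` and `"cosine"` changes only the similarity scores feeding the loss, which is consistent with the small performance gap between the two variants in Table 2.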
null
null
null
null
null
null
Learning Layer-wise Equivariances Automatically using Gradients
Accept (spotlight)
Summary: Edit: Rating updated from 6 to 7 after rebuttal. The goal of the paper is to learn an interpolation between non-equivariant and equivariant models. The authors introduce different convolutional and non-convolutional linear layers, optionally sparsified via factorizations or a smooth spatial basis. The basic idea is to define the model as a linear combination of translation-equivariant and non-equivariant layers, and to optimize their relative contribution in order to select whether the final model is equivariant or not. This is achieved by placing different Gaussian priors on their parameters, whose widths constitute hyperparameters to be optimized via Bayesian model selection. This model selection differs from prior work, which tuned the hyperparameters explicitly. Bayesian model selection requires the evaluation of a marginal likelihood term, which is infeasible. This is addressed via the Laplace approximation of the parameter posterior proposed by Immer et al. (2022). As this would still require the computation of a Hessian matrix, which scales quadratically in the number of parameters, a Kronecker factorization approximation is used. Experiments on image classification datasets show that the interpolated models outperform non-equivariant models and, on tasks with broken equivariance, also strictly equivariant models. The latter are, in turn, performing slightly better on strictly equivariant learning tasks. An ablation against a pure MAP optimization baseline shows that the model selection approach results in significant gains. Results for further groups beyond translations are discussed in the appendix and briefly evaluated in the experimental section. Strengths: The paper combines ideas from prior work and demonstrates superior performance compared to these baselines. It relies on a Bayesian approach and shows how it can be made feasible via approximations despite the analytical intractability and the high dimensionality of the parameter spaces. 
Automatically learning hyperparameters is certainly an improvement over manual tuning. The presented empirical evidence shows that the model selection does indeed select equivariant or non-equivariant layers depending on the symmetries of the learning task. Weaknesses: The main downside of the approach is that it does not really learn equivariant models from scratch, but rather learns to select between pre-specified models with different levels of equivariance. However, this downside is shared by a line of prior work, which the current approach improves upon. I am also not entirely convinced that learning such selections is practically relevant, since the appropriate levels of equivariance are usually known a priori or become evident when comparing an equivariant model against a non-equivariant baseline. The experiments suggest that using strictly equivariant models works better in cases where the desired equivariance group is indeed known a priori. Instead of describing general equivariant mappings, the main paper considers only conventional convolutions, i.e. translation equivariance. More general results are discussed in the supplementary material and briefly evaluated in the last experimental section. It would have been nice to have a more general formulation in the main paper. It would also have been interesting to see how the method scales to multiple symmetry groups at once. For instance, one could consider all 2^3 (i.e. exponentially many) combinations of using translations, rotations and reflections. The explanation of the method could also be clearer. I had to re-read section 3, and the exact nature of the hyperparameters, introduced in section 3, remained vague until section 4.2. It would have been easier to just mention model selection of hyperparameters in the intro and to move the current section 3 between sections 4 and 5. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The authors state that "adding symmetries does not directly improve training losses that rely on training fit", however, many papers on equivariant models show their improved convergence rate and final loss when being fitted. Could this statement be clarified and supported more explicitly with evidence? The notation of the limit in equation 9 does not seem to make sense. I guess that the authors just want to say that the implication follows given $\sigma^2=0$? I found the notation of the feature map's domain as $\mathbb{Z}^3$ somewhat confusing and would rather write $[0,C]\times \mathbb{Z}^2$ (spatially supported on $[0,X]\times[0,Y]$). More questions/suggestions are found in the "weakness" section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are not explicitly discussed. I do not have ethical concerns Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and help to improve the paper. > learns to select between pre-specified models with different levels of equivariance. However, this downside is shared by a line of prior work + scales to multiple symmetry groups at once The approach is general in that it can work with general groups, as long as they can be differentiably parameterised. Learning symmetries from training data alone, even for simple affine transformations, is still actively investigated [1, 2, 3], and we extend the parameterisations to allow for layer-wise equivariances and provide concrete improvements to the scalability of parameterisations. Parameterising distributions over other groups and scaling to multiple symmetry groups is an interesting avenue for future work, but orthogonal to enabling differentiable learning of such parameterisations, which is what our paper focuses on. > using strictly equivariant models is working better in cases where the desired equivariance group is indeed known We agree with the reviewer that when a strict global symmetry is known a priori or self-evident, it would always be better just to build it into the model. However, in some cases, the symmetry might not be known, specified, or should not be strictly enforced (think 6’s and 9’s in MNIST under rotational symmetry). Finding the right symmetry or the optimal balance between equivariant and non-equivariant layers becomes harder on more complex models and datasets, especially when this can also differ layer-wise. On image classification tasks where equivariance seems very desirable, the standard MAP objective cannot select and ignore the non-equivariant pathways. Yet, our proposed Diff Laplace objective based on approximate Bayesian model selection does allow symmetry learning and can automatically select the most relevant symmetry, becoming largely equivariant on this task. This demonstrates the ability to learn relevant layer-wise symmetries from training data automatically. 
> discussed in the supplementary material and briefly evaluated in the last experimental section We will follow the suggestion by the reviewer to move the description of general group equivariant mappings to the main text. > moving the current section 3 between sections 4 and 5. We agree with the reviewer and will adopt the suggestion to move Sec. 3 between Sec. 4 and Sec. 5 to make the paper clearer. > The authors state that "adding symmetries does not directly improve training losses that rely on training fit", however, many papers on equivariant models show their improved convergence rate and final loss when being fitted. Could this statement be clarified and supported more explicitly with evidence? There is significant evidence that MAP alone cannot learn hyperparameters, including symmetry constraints. This is supported by experiments performed in earlier work on invariance/equivariance learning [1,2,3], with explicit examples that MAP cannot learn symmetry constraints in Sec. 4.2 of [2] and App. C of [3]. For this reason, works in the literature often require differentiable validation losses [5, 6], explicit regularisation [7], or RL outer loops [4]. We hope this shows that the problems with the MAP objective are well-established facts. The reason that such a data-fitting objective does not encourage symmetry is that symmetries constrain the functions that a network can represent. Consequently, the regular objective that maximises train data fit will always prefer as little equivariance as possible (no symmetry, so σ=0), even when more equivariance would be preferable in terms of better generalisation on validation/test data. In some particular cases, as the reviewer mentions, it can happen that there is an improved training fit when using the right equivariance. However, this only occurs when model capacity is constrained. 
To properly compare two architectures, they need to be large enough for their performance not to be artificially restrained by a lack of size. This is where the difference in training fit disappears and where an additional measure of complexity needs to be used, which the Laplace approximation provides. We have seen similar claims about faster convergence rates in models with the right equivariance, and we share the reviewer's view that this is a phenomenon that _may_ allow point optimisation to favour architectures that generalise well. However, there are currently no papers that successfully demonstrate using convergence rates for architecture selection, and our attempts to use this have failed, similarly to [1,2,3]. We show that by using the proposed Diff Laplace objective using approximate Bayesian model selection, we can in fact automatically learn symmetries using training data. > notation of the limit in equation 9 Indeed. Eq.9 should be the implication that follows given $\sigma^2 = 0$. We will fix this. > I found the notation of the feature map's domain as somewhat confusing We thank the reviewer for pointing this out and adopting the suggested change in notation. [1] Wilk, Mark van der, et al. "Learning invariances using the marginal likelihood." NeurIPS 2018 [2] van der Ouderaa, Tycho FA, et al. "Learning invariant weights in neural networks." UAI 2022 [3] Immer, Alexander, et al. "Invariance learning in deep neural networks with differentiable Laplace approximations." NeurIPS 2022 [4] Cubuk, Ekin D., et al. "Autoaugment: Learning augmentation strategies from data." CVPR 2019 [5] Lorraine, Jonathan, et al. "Optimizing millions of hyperparameters by implicit differentiation." AISTATS 2020 [6] Liu, Hanxiao, et al. "Darts: Differentiable architecture search." arXiv preprint arXiv:1806.09055 (2018). [7] Benton, Gregory, et al. "Learning invariances in neural networks from training data." NeurIPS 2020 Typos will be fixed. 
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed reply. My concerns are adequately addressed and I am happy with the promised updates in the paper. I updated my rating from 6 to 7.
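For readers wanting a concrete picture of the residual-pathway construction discussed in this exchange (an equivariant path plus a non-equivariant path whose Gaussian prior variance gates its contribution), here is a 1-D NumPy sketch. It is our illustrative simplification with made-up shapes and names, not the paper's actual parameterisation:

```python
import numpy as np

def conv1d_circular(x, kernel):
    """Translation-equivariant path: circular 1-D convolution."""
    K = len(kernel)
    return sum(kernel[k] * np.roll(x, k - K // 2) for k in range(K))

def residual_pathway(x, kernel, W):
    """Layer output = equivariant path + non-equivariant (fully connected) path.

    With W = 0 the layer is strictly translation equivariant; a generic
    nonzero W breaks the symmetry.
    """
    return conv1d_circular(x, kernel) + W @ x

def neg_log_prior(W, sigma2):
    """Negative log of the Gaussian prior N(0, sigma2 * I) on the FC weights W.

    sigma2 is the hyperparameter that model selection tunes: driving
    sigma2 -> 0 forces W -> 0 at the MAP solution, recovering strict
    equivariance; a large sigma2 lets the FC path contribute freely.
    """
    return 0.5 * np.sum(W ** 2) / sigma2 + 0.5 * W.size * np.log(2 * np.pi * sigma2)
```

The point of the sketch is the gating role of `sigma2`: a plain data-fitting objective has no reason to shrink it, which is why the rebuttal argues that marginal-likelihood-based selection, rather than MAP, is needed to learn the equivariance level.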
Summary: While (group) convolutions encode strict symmetries into neural network architectures, this paper presents a method for representing flexible symmetry constraints and learning the degree of symmetry automatically (through marginal likelihood objectives). Their method builds on residual pathways to represent each NN layer as the sum of fully connected and convolutional layers, where the initialization variance hyperparameter of the FC weights controls the degree of equivariance and is automatically optimized through the marginal likelihood. They introduce a number of techniques to make this process tractable in practice:
* The FC layer is factored along spatial dimensions and possibly sparsified.
* To efficiently compute a Laplace approximation of the marginal likelihood, they use a KFAC approximation of the Hessian. The factorization/sparsification of the FC layers admits additional simplifications to the KFAC Hessian computation.

The paper includes experiments comparing this method with plain FC or CNN architectures on synthetic and natural image datasets (MNIST and CIFAR-10), and shows that it can learn to adjust the degree of equivariance to achieve good performance on these tasks. Analyzing the optimized hyperparameters shows that the method prefers to make earlier layers more equivariant and later layers less equivariant, in agreement with common architecture design principles. Finally, they show the ability to select between multiple symmetry groups (Conv and 90-degree rotation GConv) on CIFAR10. Strengths: The paper introduces a number of technical innovations to tackle a very challenging problem, automatic symmetry discovery. Marginal likelihood optimization removes the need for validation datasets or handcrafted regularizers when learning symmetries (which were used by some prior work), but is difficult to compute efficiently at the scale of modern NNs. 
It also attempts to address a common problem of symmetry discovery work where, if you start with no symmetry assumptions (FC), you end up with an enormous number of parameters for high dimensional inputs, which is not computationally feasible in practice. The authors show that factoring (along spatial dimensions) and sparsifying the FC layers can reduce the number of parameters and even admit more efficient marginal likelihood estimation. Weaknesses: The empirical results are limited and do not demonstrate that this method will be broadly applicable, in my opinion. The real datasets considered (CIFAR-10 and MNIST) are relatively simple, and existing techniques (e.g., strong data augmentation or SSL + resnets or ViT) likely obtain much stronger results on these benchmarks without needing symmetry learning. Although the purported advantage of symmetry learning is that it can hopefully do better than human-chosen symmetry constraints (like humans choosing data augmentations), the empirical results here don't show that. Although in principle the method is applicable to any symmetry group, the experiments seem to focus almost exclusively on translation invariance. Only 6.3 studies the case with 2 possible symmetries: translation (conv) and rotation (GConv). Is there a concern with scalability to more possible symmetries? Ideally, we would like the method to be able to learn any relaxed symmetries more generally without having to restrict the search space to one or two options (translation and rotation). Relatedly, much of the paper and in particular the spatial factorization seem particularly suited to image inputs (or similar input modalities). Would spatial factorization still work well with other data modalities, like graphs representing molecules? The experiments also seem to be focused on image inputs only. 
Related work: the experiments and aims of this paper are reminiscent of [Elsayed et al, 2020](https://arxiv.org/abs/2002.02959), which also aimed to learn related spatial symmetry by operating on a spectrum between locally connected and convolutional layers. Using locally connected layers can also be viewed as "sparsifying" the fully connected layer to decrease computational costs. It would be interesting to discuss the differences and similarities here, either in text or in terms of empirical results (or both). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some minor things: What is P in Eq 1, the number of parameters? Might've missed this. Why does the definition of fully connected preserve the spatial dimensions X and Y? In principle, I'd expect that a fully connected layer (or a conv layer) can change the spatial dimensions. Also, the closing bracket $[0,X$ is missing above Eq 2. Section 6.2: If I understand properly, isn't the optimal behavior just to ignore the fully connected pathway altogether and set its variance -> 0? [Post-rebuttal update]: I have read the authors' response. They answer most of my questions. I largely maintain my original score and evaluation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss limitations of their method and the general line of work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and help to improve the paper. > empirical results are limited Even when strict symmetries are desirable, our method shows that such favourable architectures can be discovered automatically, whereas the standard MAP objective cannot. Our proposed method allows automatically learning layer-wise equivariances from training data through approximate Bayesian model selection. We have demonstrated this principle but agree with the reviewers that assessing performance on larger real-world datasets and models would be interesting. > method is applicable to any symmetry group, but experiments focus on translation invariance We describe the method for general groups and demonstrate symmetry selection between multiple groups in Sec. 6.3. Although some prior works have considered a more extensive range of groups (e.g. [1]), we have a greater focus on the practical scalability of the parameterisations (Sec. 4) and put considerable work into the objective function (Sec. 3, 5). In doing so, we have demonstrated the principle of automatically selecting the most relevant layer-wise equivariances from training data through Bayesian model selection. We agree that it would be interesting to consider more groups in future work. > scalability to more possible symmetries? Scalability to more symmetries likely depends on the exact problem set-up and the considered groups and representations. Parameterising group symmetries is a research field in its own right and not the focus of this work. Nevertheless, existing parameterisations of neural networks with many symmetries and regular group representations should be readily extendable to relaxed and learnable symmetries using the techniques presented in this paper. We see no direct concern with scalability to more symmetries. This is an interesting avenue for future research, but orthogonal to enabling learnable layer-wise symmetries from training data, which this work focuses on. 
> Would spatial factorisation still work well with other data modalities, like graphs representing molecules? Yes, the spatial factorisation should generalise to other data modalities as long as you can construct basis functions on the manifolds [2], which are also available for graphs [4]. > Related work: the experiments and aims of this paper are reminiscent to Elsayed et al, 2020, We greatly thank the reviewer for pointing this out and will include [3] in the related work. There are indeed some interesting parallels between the work and the parameterisation aspects (Sec. 4) of our work. Indeed, the motivation behind including spatial dependence is similar as well as recognising that "combining weights scales linearly with the number of pixels in the image, which may be large." There seem to be subtle differences in how both methods address this. For instance, [3] uses 'low-rank locally connected layers' factorisation between height and width dimension, whereas our factorisation factors between the full input and output dimensions. Further, our sparsification uses basis features which do not necessarily result in a rank-1 matrix spatially, whereas the low-rank factorisation of [3] will. > What is P in Eq 1, the number of parameters? Indeed, P is the number of parameters. We will make this clear from the text. > Why does the definition of fully connected preserve the spatial dimensions X and Y? This indeed differs slightly from the typical formulation of fully-connected layers that allow different input and output dimensions. We do this mainly for notational simplicity and to avoid running into the complexity of having different group structures on the input and output. We are still absolutely flexible regarding changing spatial dimensions but only do this through equivariant pooling (see App. F). 
For instance, instead of defining an FC layer with 2x smaller output dimensions, we simply use an FC without spatial downsampling followed by a 2x2 filter spatial pooling layer. We go through all this effort to ensure that equivariant paths are strictly equivariant and that our experimental set-up can verifiably show that our method actually learns layer-wise symmetries. This might be less important when applying the method in practice. > If I understand properly, isn't the optimal behaviour just to ignore the fully connected pathway altogether and set its variance -> 0? Yes. Equivariance on the image classification task is likely desirable, thus ignoring the fully connected pathway. We refrain from making any hard claims about optimality but generally agree with the reviewer. In line with theory, we observe that this “optimal behavior” cannot be learned with the standard MAP objective but can be learned with Diff. Laplace. Using the proposed objective, the network `becomes equivariant’ and largely learns to ignore the fully connected pathways. This demonstrates the ability of our method to select the most relevant symmetry from training data automatically. [1] Finzi, Marc, Gregory Benton, and Andrew G. Wilson. "Residual pathway priors for soft equivariance constraints." NeurIPS 2021 [2] Azangulov, Iskander, et al. "Stationary kernels and Gaussian processes on Lie groups and their homogeneous spaces i: the compact case." 2022 [3] Elsayed, Gamaleldin, et al. "Revisiting spatial invariance with low-rank local connectivity." ICML 2020. [4] Borovitskiy, Viacheslav, et al. "Matérn Gaussian processes on graphs." AISTATS 2021. Typos will be fixed.
Summary: This paper proposes a neural network architecture and gradient-based training algorithm for modeling approximately equivariant functions. The architecture builds upon residual pathway (Finzi et al., 2021), where each layer of network is parameterized as an additive combination of constrained equivariant path and unconstrained fully connected path. A general challenge for such architectures is that equivariance is not favored for fitting training data, so empirical loss minimization will likely result in unstructured solutions that do not generalize. While the residual pathway paper solves this by putting higher weight regularization on unconstrained path, the strength of regularization is a hyperparameter that requires search with validation data in principle. The main motivation in this paper is to learn the extent of equivariance in each layer only from training data in a single training run (online). For this, the authors adopt Bayesian model selection which allows gradient-based learning of hyperparameters from training data by maximizing marginal likelihood estimate, specifically chosen in this work to be Laplace approximation with structured Hessian approximation with KFAC. Given that, the major technical contributions of the paper are on (1) improving the parameter efficiency of residual pathway by introducing convolution on Lie groups as well as factorization and spatial sparsification based on standard exponential basis functions, and (2) specifying the extent of equivariance as hyperparameters controlling the priors placed on the parameters, and (3) deriving KFAC for the proposed parameterizations so that gradient-based learning of hyperparameters controlling the extent of equivariance is made possible through maximization of marginal likelihood estimate. 
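For reference, the Laplace approximation of the log marginal likelihood that the hyperparameter gradients flow through has the standard form (notation mine: $\theta_*$ is the MAP estimate, $\eta$ the prior hyperparameters controlling the extent of equivariance, and $\mathbf{H}$ the Hessian of the negative log joint at $\theta_*$, here approximated with KFAC):

```latex
\log p(\mathcal{D} \mid \eta)
\;\approx\;
\log p(\mathcal{D} \mid \theta_*)
+ \log p(\theta_* \mid \eta)
- \tfrac{1}{2}\log\left|\tfrac{1}{2\pi}\,\mathbf{H}\right|
```

Maximising only the first two terms is MAP training; the log-determinant term is the Occam factor that penalises model complexity and is what allows equivariant layers to be preferred despite not improving training fit.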
The authors provide experiments mainly regarding discrete 2-dimensional translation symmetry, and demonstrate that the proposed algorithm can (1) learn partial symmetry when needed, (2) recover the standard architecture of convolutional stack postfixed by fully-connected layers, and (3) learn to select from multiple symmetry groups depending on task, solely from training data. Finzi et al., Residual Pathway Priors for Soft Equivariance Constraints (2021) Strengths: S1. Overall, I think this is a solid work that contributes towards solving a challenging and important problem of learning symmetry from training data by bridging (approximate) equivariant architecture design and Bayesian deep learning. I find no critical issue with originality, quality, clarity, and significance of the work; the writing is overall clear, the design of the algorithm is clearly motivated and presented, and the experimental results seem to support the main claims and motivations. Weaknesses: W1. In Table 1-3, in addition to test performance, it would be nice if I could see how the models (over)fit to training data (i.e., how they generalize) given that a major motivation of the work comes from the fact that equivariance is not encouraged when fitting training data but beneficial for generalization. W2. The algorithm is empirically demonstrated for discrete 2-dimensional translation and 90-degree rotation symmetries. While the authors argue that extension to more general groups is possible in principle, I find the empirical demonstration regarding the argument is limited compared to the residual pathway paper (Finzi et al., 2021) that provided comprehensive experiments regarding e.g., continuous orthogonal groups as well. Finzi et al., Residual Pathway Priors for Soft Equivariance Constraints (2021) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1. 
Considering the space of input and output feature maps, how is the group lifting (Appendix A) done in general when there is a set of M groups G1, …, GM (Appendix B)? Do you use the direct product G = G1 x … x GM? This doesn't seem straightforward; I might have missed something. Q2. Does the derivation of KFAC in Appendix C extend to other groups in a similar way to Appendix A? It seems so intuitively, but I would like to ask for a clarification from the authors. Q3. Do the layer indices 0-7 in Appendix D match each layer in Table 6? That is, can I regard the learned layer 6 in Appendix D as corresponding to the layer that maps spatial dimension 4 x 4 to 2 x 2 in Table 6? Q4. In Table 6, it is written that the architecture is used for all experiments, and CONV layers are marked with kernel size (e.g., 3 x 3). How should one interpret this for S-CONV layers where, to my understanding, kernel size is not fixed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have partially addressed limitations of the work in Section 7; I encourage further clarification of limitations that need to be addressed in future work, if there are any. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and help to improve the paper. > In Table 1-3, in addition to test performance, it would be nice if I could see how the models (over)fit to training data. Thank you for raising this. We agree with the reviewer that providing training performance is important given the motivation of learning symmetries by balancing data fit and model complexity through approximate Bayesian model selection. We can confirm that all models fit the training data well, as measured by a low negative log likelihood. We will add performance on the train set to the final version. > the authors argue that extension to more general groups is possible in principle, I find the empirical demonstration regarding the argument is limited The method is described for general groups, and an experiment demonstrating symmetry selection between multiple groups is given in Sec. 6.3. Although some prior works have considered a more extensive range of groups (e.g. [2]), our work focuses much more on the scalability of parameterisations, and we put a considerable amount of work into deriving an objective function that can learn symmetry; this, rather than parameterising many groups, is our focus. We demonstrate the principle of automatically selecting the most relevant layer-wise equivariances from training data through Bayesian model selection. We agree that it would be interesting to consider more groups in future work. > Q1. How is group lifting done in general? Parameterising neural networks to obey group symmetries is an entire research field in its own right. The choice of parameterisation of the symmetry and its representation may vary depending on task-specific constraints. The formalisation of group lifting can be found in [3]. When combining rotation and translation, we follow regular representations similar to [1].
This means that if the network learns to ignore non-equivariant paths entirely, it will become exactly equivalent to a standard G-CNN (e.g. P4CNN in [1]). > Q2. Does the derivation of KFAC in Appendix C extend to other groups in a similar way to Appendix A? In our case, yes: since we define filters on a group through base features defined in a vector space [4], the derivation can readily be extended to KFAC without issues. KFAC only needs to properly handle the weight-sharing induced by the group parameterisation, which we explain how to do in Sec. 5. We do note, however, that parameterising symmetries in neural networks is a research field in its own right. Different parameterisations might affect how easily KFAC can be derived. We will add this disclaimer. > Q3. Do the layer indices 0-7 in Appendix D match each layer in Table 6? Yes. The same architecture is used for all experiments and the layer numberings are consistent. > Q4. In Table 6, it is written that the architecture is used for all experiments, and CONV layers are marked with kernel size (e.g., 3 x 3). How should one interpret this for S-CONV layers where, to my understanding, kernel size is not fixed? The kernel size of an S-CONV layer is fixed and kept the same (e.g. a 3 x 3 filter) as in the CONV layer. The difference between the two layers is that the number of parameters is decoupled from the kernel size. That is, by using basis features we can build an S-CONV layer with a 3 x 3 filter using fewer (e.g. 4) parameters, instead of the 9 required by CONV. A more extensive explanation of sparsifying filters through basis features can be found in [4]. We will make this clearer in the main text. [1] Cohen, Taco, and Max Welling. "Group equivariant convolutional networks." International conference on machine learning. PMLR, 2016. [2] Finzi, Marc, Gregory Benton, and Andrew G. Wilson. "Residual pathway priors for soft equivariance constraints."
Advances in Neural Information Processing Systems 34 (2021): 30037-30049. [3] Kondor, Risi, and Shubhendu Trivedi. "On the generalization of equivariance and convolution in neural networks to the action of compact groups." International Conference on Machine Learning. PMLR, 2018. [4] van der Ouderaa, Tycho FA, and Mark van der Wilk. "Sparse Convolutions on Lie Groups." NeurIPS Workshop on Symmetry and Geometry in Neural Representations. PMLR, 2023. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the response. I recommend the authors to include the discussion regarding W2, Q1, Q2 in the final version of the paper. I have no further questions for now.
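The parameter decoupling for S-CONV layers described in the rebuttal above (a 3 x 3 filter built from, e.g., 4 basis coefficients instead of 9 free weights) can be sketched as a linear combination of fixed basis filters. The random basis below is only a stand-in for the paper's exponential basis functions, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed (non-learned) basis: 4 basis filters evaluated on a 3x3 grid.
# A random basis stands in for exponential basis functions on the group.
basis = rng.standard_normal((4, 3, 3))
theta = rng.standard_normal(4)  # only 4 learnable parameters

# An S-CONV-style 3x3 filter built from 4 parameters instead of the 9 of CONV:
filt = np.tensordot(theta, basis, axes=1)
assert filt.shape == (3, 3)
```

The kernel size (3 x 3) stays fixed while the parameter count is set by the number of basis functions, which is the decoupling the rebuttal refers to.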
Summary: The work proposes an automatic way to learn equivariance in each layer by finding a balance between an equivariant layer and an unrestricted fully connected layer. Unlike previous work on soft equivariance, the work proposes to learn the balance between them via Bayesian model selection using gradients. The work also proposes different parameter-reduction techniques and achieves better results in the conducted experiments. Strengths: 1. The work provides a technique for learning equivariance structure automatically through gradients 2. The work addresses the overparameterization of such soft equivariant models by factorization and sparsification. 3. The paper is well-organized and clearly written. Weaknesses: 1. Evaluations: The empirical evaluations are conducted primarily on image data, where the motivation for finding a balance between the equivariant and non-equivariant layers is not clear (except for the toy problem). Clearly, for image classification, labels should not be affected by translation or rotation. For the toy problem, a comparison with Finzi et al. [2021a] is not provided. 2. The performance of a regular equivariant architecture is not provided, which makes it difficult to measure the gain from the proposed method. Marc Finzi, Gregory Benton, and Andrew G Wilson. Residual pathway priors for soft equivariance constraints. Advances in Neural Information Processing Systems Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I am curious about the memory and time (training) required for the proposed model compared to a regular equivariant architecture. 2. What may be the reason for MAP performing worse than the proposed Diff. Laplace method? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. > motivation for finding equivariance balance Symmetry discovery from data is a well-recognized task and is actively studied [1,2,5,6,7]. The key motivation is that the true symmetry may be unknown or not specified, and that the optimal balance of equivariant and non-equivariant layers is in fact not 'clear'. Examples of this already arise in simple tasks, such as 6's and 9's in MNIST or our translational-dependence toy problem. Finding the right balance arguably becomes even more difficult on more complex data and architectures, and when optimal symmetries differ per layer, which is what we focus on. As discussed in Sec. 6.2, common vision architectures deploy translational symmetries, but are often not strictly invariant globally due to fully connected layers at the end of the network. Our method paves the way to discovering such favourable architectures automatically from training data. Our work demonstrates the principle of automatically learning layer-wise equivariances through approximate Bayesian model selection for neural networks. Even where strict symmetry is most desirable, our method is able to ignore non-equivariant layers and 'become equivariant', whereas standard MAP objectives cannot. > performance of regular equivariant architecture is not provided All experiments include the performance of regular equivariant architectures, denoted by 'CONV'. > memory and time (training) The proposed factorised F- and sparsified S- parameterisations improve both training time and memory by roughly 2x at a negligible loss in performance. Tables 1-2 include the exact amounts of memory in terms of the number of parameters, and we observe that active GPU memory in practice is roughly proportional to these counts. All experiments can run on a single 24GB GPU. We will report the exact numbers in the final manuscript. The Diff. Laplace objective performs better than MAP but is ~1.2-2x slower in training.
We would like to mention that we rely on modern linearised Laplace approximations, which are currently actively being studied and improved. For instance, concurrent work [4] shows a 10x improvement of the Laplace approximation and should also readily be applicable to our setting. > MAP performing worse than the proposed Diff. Laplace method? Although MAP can very successfully fit model parameters (θ), it typically cannot learn hyper-parameters (η), such as prior variances or symmetry constraints (in our case: layer-wise equivariances). The reason that such a data-fitting objective does not encourage symmetry is that symmetries constrain the functions that a network can represent. Consequently, the regular objective that maximises train data fit will always prefer as little equivariance as possible (no symmetry, so σ=0), even when more equivariance would be preferable in terms of better generalisation on validation/test data. This is why in Deep Learning it is common to select hyper-parameters using validation data, e.g. cross-validation, or with more complex schemes such as RL outer loops [3] or differentiated validation losses [8]. Recent work [5] shows that the marginal likelihood can be used to learn invariances directly from training data, which was demonstrated for deep learning in [6]. Unlike regular MAP, this objective has a built-in 'Occam's razor' effect and therefore balances good 'data fit' with model complexity (through the log determinant term). Similarly, we use Diff. Laplace to perform approximate Bayesian model selection. Crucially, we introduce a scalable parameterisation that allows differentiable layer-wise equivariances, unlike prior works that only consider invariances. We demonstrate that - in line with theory - Diff. Laplace allows us to optimise the right layer-wise equivariances η using training data. This demonstrates the principle of automatically learning layer-wise equivariances from training data.
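The Occam's razor effect described above can be made concrete with a toy Bayesian linear regression, where the marginal likelihood is available in closed form: the train-fit (MAP/ridge) objective always prefers the loosest prior, while the evidence trades data fit against a log-determinant complexity term. This is a generic illustration of marginal-likelihood hyperparameter selection, not the paper's Laplace/KFAC machinery; all function names here are ours.

```python
import numpy as np

def log_evidence(X, y, prior_var, noise_var):
    """Log marginal likelihood of Bayesian linear regression
    y = X w + eps, with w ~ N(0, prior_var * I) and eps ~ N(0, noise_var * I)."""
    n = len(y)
    C = noise_var * np.eye(n) + prior_var * X @ X.T
    _, logdet = np.linalg.slogdet(C)  # complexity (log-determinant) term
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

def ridge_train_mse(X, y, prior_var, noise_var):
    """Train MSE of the MAP (ridge) solution for the same model."""
    lam = noise_var / prior_var
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.5 * rng.standard_normal(50)

prior_vars = [1e-3, 1.0, 1e3]
mses = [ridge_train_mse(X, y, v, 0.25) for v in prior_vars]
evidences = [log_evidence(X, y, v, 0.25) for v in prior_vars]
# Train fit only improves as the prior loosens, so it cannot select prior_var;
# the evidence can peak at the intermediate (well-specified) value.
```

Here the prior variance plays the role of a symmetry-controlling hyperparameter η: maximizing train fit drives it to the unconstrained extreme, while the marginal likelihood can prefer a constrained model.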
We hope that this shows that problems with the MAP objective are well-established facts. A more thorough discussion can be found in [5] and [6] and explicit examples that MAP cannot learn symmetry constraints in Sec. 4.2 of [7] and App. C of [6]. [1] Benton, Gregory, et al. "Learning invariances in neural networks from training data." NeurIPS 2020 [2] Wang, Rui, et al. "Approximately equivariant networks for imperfectly symmetric dynamics." ICML 2022 [3] Cubuk, Ekin D., et al. "Autoaugment: Learning augmentation strategies from data." CVPR 2019 [4] Immer, Alexander, et al. "Stochastic marginal likelihood gradients using neural tangent kernels." ICML 2023 [5] Wilk, Mark van der, et al. "Learning invariances using the marginal likelihood." NeurIPS 2018 [6] Immer, Alexander, et al. "Invariance learning in deep neural networks with differentiable Laplace approximations." NeurIPS 2022 [7] van der Ouderaa, Tycho FA, et al. "Learning invariant weights in neural networks." UAI 2022 [8] Lorraine, Jonathan, et al. "Optimizing millions of hyperparameters by implicit differentiation." AISTATS 2020 --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I appreciate the authors for their prompt response and for clarifying the confusion regarding the comparison with the regular equivariant model. I have now updated my evaluation.
NeurIPS_2023_submissions_huggingface
2023
Sequential Predictive Two-Sample and Independence Testing
Accept (poster)
Summary: [Update: After reading the other reviews and the rebuttal, I have increased my score from 7 to 8] The paper provides an algorithm for sequential testing of the homogeneity (two-sample) and independence hypotheses. The core idea is that one learns a classifier to distinguish $P$ from $Q$ (in the case of two-sample testing) or $P_{XY}$ from $P_X \times P_Y$ (in the case of independence testing) from the previously seen data. This learned classifier is used to place a bet on a new observation (how likely it is that it came from one class or the other). If the bet succeeds, one updates the betting budget (starting from 1). There is also a parameter that trades off how much of the existing budget is used at each step. If the betting budget ever exceeds $1/\alpha$, with $\alpha$ the significance level, then one rejects the null. The authors show that the test controls type-I error and is consistent under the alternative if a non-trivial classifier can be learned. Furthermore, they provide a strategy to select betting fractions which improves performance, especially if the classifier during the first iterations has poor quality. Strengths: - The paper nicely combines recent work on sequential testing with the power of general classification algorithms. It does not require any specific learning algorithm but can use any existing learning framework. - The theory is presented at an adequate level. - The experiments nicely underline the theoretical statements of the work and show good empirical performance. Weaknesses: - If I understand correctly, the main innovation over Pandeva et al [2022] is that the present work uses adaptive betting fractions? Is this a correct interpretation? I struggle a bit to clearly see the parallels and differences to Pandeva et al. They formulate their test in terms of E-values. Can your tests also be rephrased in terms of E-values to make a comparison simpler? **Minor**: - It took me a bit to understand the notation in some places: - l. 75.
What does $\sigma$ stand for? - After l. 104. Say that probability and expectations are wrt $\frac{1}{2}(P+Q)$ (at least that's what I figured) - Introduce $\vee, \wedge$ notation for min/max - l. 127: what is the expectation taken over? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Please see my remark above regarding Pandeva et al. Below are curiosities that could further improve the paper, but are not absolutely critical. - In Batch C2ST, where the sample size is fixed upfront, usually half the data is used for learning/testing. However, I am not aware that this is a principled choice. What if you used your approach even in batch testing to circumvent the need to select the splitting ratio? Can we get better power? I think it would be nice to have an experiment with exactly the same algorithm / architecture and compare the two approaches. - You prove that the type-I error is controlled even if the test is run indefinitely. I am curious: does the test eventually exhaust the significance level? Do you have any theoretical or empirical insights? - You apply your test to every new datapoint. Pandeva et al only update after a small batch of new data is presented. Have you considered this? It might be computationally a bit more convenient, and I am curious how it would affect test power. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I do not see many limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
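The betting scheme summarized in this review (wealth starts at 1, is multiplied by $1+\lambda_t f_t$ at each step, and the null is rejected once the wealth reaches $1/\alpha$) can be sketched as a minimal loop. The constant betting fraction and synthetic payoffs below are illustrative stand-ins, not the paper's classifier-based payoffs or its ONS betting strategy.

```python
import random

def sequential_test(payoffs, lam=0.5, alpha=0.05):
    """Test by betting: K_0 = 1, K_t = K_{t-1} * (1 + lam * f_t);
    reject the null as soon as the wealth reaches 1/alpha.
    Payoffs f_t in [-1, 1] and lam in [0, 1] keep the wealth nonnegative,
    and Ville's inequality bounds P(ever rejecting under the null) by alpha."""
    wealth = 1.0
    for t, f in enumerate(payoffs, start=1):
        wealth *= 1.0 + lam * f
        if wealth >= 1.0 / alpha:
            return "reject", t
    return "fail to reject", len(payoffs)

random.seed(0)
# Under the null, a classifier has no edge: payoffs average to zero.
null_payoffs = [random.choice([-0.5, 0.5]) for _ in range(200)]
# Under the alternative, payoffs have positive mean and the wealth grows.
alt_payoffs = [0.3] * 200

decision, t = sequential_test(alt_payoffs)
null_decision, _ = sequential_test(null_payoffs)
assert decision == "reject"
```

Because the rejection threshold is checked at every step, the sample size never has to be fixed upfront, which is exactly the contrast with batch C2ST raised in the review.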
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback on the paper. Below, we address in detail issues that have been raised in the review. > The main innovation over Pandeva et al [2022] is that the present work uses adaptive betting fractions?...hard to clearly see the parallels and differences to Pandeva et al. They formulate their test in terms of E-values. Can your tests also be rephrased in terms of E-values to make the comparison simpler? Our works are indeed closely related, as we pointed out in the paper. However, our focus is on the sequential setting, whereas Pandeva et al. focus on the batch case. Our construction results in a nonnegative martingale starting at 1 under the null hypothesis (test martingale). Test martingales are essentially a dynamic version of (conditional) e-values (see [1] for more context). The critical difference is indeed in using adaptive betting fractions, which results in (a) substantial empirical improvements in terms of power, as we illustrate in Example 1 in the paper, and (b) much milder conditions for consistency (Pandeva et al. do not actually even discuss consistency, but the conditions should be essentially the same as for the test of Lhéritier and Cazals, whereas our test is consistent as long as the squared errors (or the misclassification error for the second test) are better on average than those of a naive predictor). [1] ``Game-Theoretic Statistics and Safe Anytime-Valid Inference'', Ramdas et al., 2023 > Line 75: what does $\sigma$ stand for? $\mathcal{F}_t = \sigma(Z_1,\dots,Z_t)$ stands for the sigma-field generated by $Z_1,\dots,Z_t$, with $\mathcal{F}_0$ standing for the trivial sigma-field. > After line 104. Say that probability and expectations are wrt $\frac{1}{2}(P+Q)$ (at least that's what I figured) Section 2 indeed corresponds to the setting of Definition 1. We have clarified that point in the revision. > Introduce $\wedge, \vee$ notation for min/max Done, thanks. > Line 127: what is the expectation taken over?
Section 2 corresponds to the setting of Definition 1, i.e., $(Z_1, W_1),(Z_2, W_2),\dots$ (and $(Z,W)$) are i.i.d. draws from $\frac{1}{2}(P+Q)$, and those are the only sources of randomness over which expectations are taken. We have clarified that point in the revision. > You prove that the type-I error is controlled even if the test is run indefinitely. I am curious: Does the test eventually exhaust the significance level? Do you have any theoretical or empirical insights? Yes, the test "essentially" exhausts the significance level (Ville's inequality holds with equality for continuous-time martingales if the limiting cumulative variance is infinite, which it will be if we never stop betting; in discrete time, there is a slight looseness due to ``overshoot'', but it is a second-order effect). In practice, it is indeed the case that the tests cannot be run indefinitely. As we point out in Remark 1, if an analyst stops the experiment early, i.e., before the wealth exceeds $1/\alpha$, there is one modification that allows using the non-exhausted type-I error budget to strictly improve power: one may use a different threshold for rejecting the null, namely $U/\alpha$, where $U$ is an independently drawn uniform (or stochastically larger than uniform) random variable on [0, 1]; see [2] for more details. [2] ``Randomized and exchangeable improvements of Markov’s, Chebyshev’s and Chernoff’s inequalities'', Ramdas and Manole, 2023. > In Batch C2ST, where the sample size is fixed upfront, usually half the data is used for learning/testing. However, I am not aware that this is a principled choice. What if you used your approach even in batch testing to circumvent the need of selecting the splitting ratio. Can we get better power? I think it would be nice to have an experiment with exactly the same algorithm/architecture and compare the two approaches. The role of sequential tests is to complement existing batch tests, not to replace them.
Generally, if one is ready to commit to a particular sample size, it is better to use batch tests. To understand the loss of power, we conducted an experiment, and the results are added to the allowed separate PDF. We take $P=\mathcal{N}(0,1)$, $Q=\mathcal{N}(\delta,1)$, and use a $k$-NN predictor. We set the sample size to 1500 and compared three tests: batch (with half of the data used for training and half for inference), Seq-C-2ST (our sequential test), and Seq-C-2ST + Randomization (as per Remark 1). The batch test has (slightly) higher power than our sequential one if the sample size is specified beforehand. Yet in cases where the power is less than one, using a sequential test allows collecting more data to improve it, whereas with the batch test nothing can be done since the type-I error budget is fully utilized. We have added this experiment to the Appendix and included the figure in the rebuttal. > You apply your test to every new datapoint. Pandeva et al only update after a small batch of new data is presented. Have you considered this? It might be computationally a bit more convenient, and I am curious how it would affect test power. In Appendix B, we discuss a minibatched version of our test where wealth updates are performed each time a batch of data is observed. It is a great question whether minibatches could improve power (i.e., whether this can be translated into results about the growth rate of the underlying wealth process), and we plan to investigate this question in future work. --- Rebuttal Comment 1.1: Title: Thanks -- no further questions Comment: I thank the authors for their clarifications and for adding additional experiments relative to Batch 2ST. They have also appropriately addressed my other questions, which went beyond the initial submission. In conclusion, I am raising my score from 7 to 8.
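The randomization referenced above (Remark 1 / Ramdas and Manole: reject at threshold $U/\alpha$ with $U$ uniform, instead of the deterministic $1/\alpha$) can be sketched as follows. The helper name is ours, and this is an illustration of the idea rather than the paper's exact procedure.

```python
import random

def randomized_reject(final_wealth, alpha, rng=None):
    """Randomized Ville threshold: at a stopping time, reject if the final
    wealth is at least U/alpha with U ~ Uniform[0, 1]. Since U/alpha <= 1/alpha,
    this rejects whenever the deterministic rule does (and sometimes when it
    does not), while the type-I error remains at most alpha."""
    rng = rng or random.Random(0)
    u = rng.uniform(0.0, 1.0)
    return final_wealth >= u / alpha

alpha = 0.05
# The deterministic rule rejects at wealth >= 20; so does the randomized rule.
assert randomized_reject(20.0, alpha) is True

# For sub-threshold wealth, rejection happens with probability wealth * alpha:
rng = random.Random(1)
freq = sum(randomized_reject(10.0, alpha, rng) for _ in range(2000)) / 2000
# freq is close to 10 * 0.05 = 0.5
```

This is how the Seq-C-2ST + Randomization variant in the experiment above can recover power from a non-exhausted type-I error budget at early stopping.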
Summary: The authors propose methods for two-sample and independence testing in the setting of sequentially released data. Theoretical and empirical evaluations of the proposed methods are included. Strengths: The authors propose algorithms for sequential two-sample and independence tests. The setting is interesting and important for practical applications. They provide theoretical results regarding the stopping of the algorithm and the growth rate of the wealth. They also study the proposed methods numerically with synthetic and real-world data. Weaknesses: The readability can be improved. The introduction is so long that it is hard to grasp the contributions of the paper. I think the authors should split Section 1 into 2 sections: one for explaining the motivation and the contribution, the other for the problem setting. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In the paper, the initial values of $\lambda_t^{ONS}$ and $g_t$ are set to 0. Are there any strategies to set better initial values, or are there any empirical observations about that? - What is $\sigma(Z_1,\ldots,Z_{t-1})$ in line 75? Does it mean the $\sigma$-algebra? - In line 121, can we guarantee the existence of the minimizer $g_{\star}$? Or is it just an assumption? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: In the proposed algorithm, the computation is the same for all $t$. Thus, the estimated $\lambda_t^{ONS}$ and $g_t$ for small $t$ seem to perform worse than those for large $t$. It cannot be helped to a certain extent, but in the sequential setting, I think investigating some strategies to improve the performance for small $t$ would be important.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback on the paper. Below, we address in detail issues that have been raised in the review. > Readability can be improved. The introduction part is so long that I understand the contributions of the paper. I think the authors should split Section 1 into 2 sections (motivation + contribution and problem setting). We thank the reviewer for suggestions about improving the readability of the paper. We have added more structure in the revised paper; specifically, we have added more subsections to Sections 1 and 2. > Initial values of $\lambda_t$ and $g_t$ are set as 0. Are there any better strategies / any empirical observations about that? Initializing $\lambda_t$ with 0 is not critical, since the online Newton step is used to update the betting fractions, and we simply used the default value specified in the earlier works. Initializing the predictor with a constant predictor is a bit more subtle: in our experiments, we trained all the models from scratch, but one can definitely use pre-trained models (e.g., weights of an image classifier trained on some other data) as initialization. We have added a remark about that point to the paper. > What is $\sigma(Z_1,\dots, Z_{t-1})$ in line 75? Does it mean the $\sigma$-algebra? This is correct. For $t\geq 1$, $\mathcal{F}_{t}=\sigma(Z_1,\dots, Z_{t})$ stands for the sigma-field generated by $Z_1,\dots,Z_t$, and $\mathcal{F}_0$ stands for the trivial sigma-field. We have now clarified this part in the paper. > In line 121, can we guarantee the existence of the minimizer $g_\star$? or is it just an assumption? This part of the paper aims to build intuition. To define the oracle test, we only care about the risk of $g_\star$ (the optimal classifier in our class), and if the risk minimizer is not unique, $g_\star$ is chosen as any of the predictors that minimize the risk.
When a classifier is trained online, then we do not need to assume convergence to $g_\star$, but the power will depend on the risk of the limiting classifier (whatever it is). > In the proposed algorithm, the computation is the same for all $t$. The estimated $\lambda_t$ and $g_t$ for small $t$ seem to perform worse than those for large $t$. It cannot be helped to a certain extent, but in the sequential setting, I think investigating some strategies to improve the performance for small $t$ would be important. Good point! The rule for updating $\lambda_t$ is indeed the same for all $t$. When the null hypothesis is false, the performance of a classifier $g_t$ clearly improves as more data are observed, and inferior performance in the beginning (when only a few points have been observed) is indeed expected. In practice, using pre-trained models (e.g., for image data) as initialization may definitely improve the performance during the early stages of testing. Another idea could be to bet slightly more conservatively for the first 20-30 (say) rounds, which is a heuristic that will affect the final regret guarantee or achieved growth rate. --- Rebuttal Comment 1.1: Comment: I have read the response. Thank you for your response.
Summary: This paper proposes two sequential predictive two-sample tests based on betting: one constructed from the payoff function $W\cdot \mathrm{sign}[g(Z)]$ for the misclassification risk, and the other from the payoff function $W\cdot g(Z)$ for the squared risk. The limiting growth rate and the expected growth rate of both tests for an optimal $g^*\in G$ are given, and additionally, the same quantities of the squared-risk-based test for an arbitrary $g\in G$ are given. Strengths: Originality: The paper proposes and thoroughly analyzes the sequential predictive two-sample tests; to the best of my knowledge, the proposed tests and the theoretical results are new. Quality: The theoretical result is decent, giving a growth-rate comparison between the test based on the misclassification risk and the test based on the squared risk. Clarity: The paper overall presents well. Significance: The paper fills the blank that there is no formal construction of a classifier two-sample test using the betting framework. Weaknesses: 1. The variance of the growth rate is not given, and it is actually also important in the derivation of the testing power. 2. The expected growth rate is a result of an optimal $\lambda$ instead of an arbitrary one. 3. The comparison with the likelihood ratio test (Alix Lhéritier, 2018) is missing in the real-data experiment. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why is $\lambda$ restricted to $[0, 1]$ instead of $[-1, 1]$? I can imagine a case where the payoff is negative; then using a negative $\lambda$ would help increase the wealth in betting. 2. Is fitting the betting framework into the coin-flip case the reason why the payoff functions are restricted to $[-1,1]$? 3. Why is the ONS in Algorithm 1 different from the one in the work of Cutkosky and Orabona (2018) and Shekhar and Ramdas (2021), e.g., the updates of $z_t$ and $\lambda_{t+1}^{ONS}$? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback on the paper. Below, we address in detail issues that have been raised in the review. > Variance of the growth rate is not given ... it is actually important in the derivation of the testing power. We kindly disagree with the reviewer regarding this claim. To establish consistency of our test, it suffices to show that the growth rate of the wealth process (expected log wealth) is positive, which we do: we show that the growth rate is lower bounded by a term proportional to the deviation of the risk of the limiting predictor from that of a naive predictor. Hence, as long as it is possible to learn a predictor with non-trivial prediction risk, our test is guaranteed to be consistent. > Expected growth rate is a result of optimal $\lambda$ instead of an arbitrary one. We kindly disagree with the reviewer regarding this claim. The lower bounds on the growth rate in Theorem 1 correspond to the case when the betting fractions are selected via the ONS strategy. The upper bounds indeed correspond to the optimal/oracle betting fraction $\lambda_\star$, and those are meant to quantify the best possible growth rate, and thus also show that the growth rate of ONS matches that of the oracle test. In summary, an arbitrary $\lambda$ would not work with our test: instead, we adaptively set $\lambda_t$ online using ONS, and the growth rate that this strategy achieves is at least (i.e., lower bounded by) that of the oracle $\lambda_\star$ (which is unknown to us). > Comparison with the LR test (Alix Lhéritier, 2018) is missing in the real-data experiment. As we show through experiments, our predictive test (with CNNs) outperforms the one developed in [1] on image data.
We omitted the comparison to the test of Lhéritier and Cazals [2019] since it has already been shown to be much inferior to the test developed in [1], specifically on multivariate data with localized differences between two distributions of interest (similar to the differences in our real data experiments). We have clarified that point in the revision. [1] ``Nonparametric Two-Sample Testing by Betting'', Shekhar and Ramdas, IEEE Transactions on Information Theory (to appear), 2023. > Why $\lambda$ is restricted to $[0,1]$ instead of $[-1,1]$. I can imagine a case where the payoff is negative, then using negative $\lambda$ would help increase the wealth in betting. Thanks for the great question! Technically, using the range $[-1,1]$ instead of $[0,1]$ is allowed, and none of the theoretical statements in our paper get affected. When ONS is used for selecting betting fractions, this translates into truncating $\lambda_t$ at -1/2 instead of 0. We tried both in our early experiments and did not observe any major differences in power, but truncating at 0 resulted in a bit more stable wealth processes, and hence, we used this option. If the alternative is true and a good classifier is learned online, then both of our payoffs will be positive on average, in which case it makes sense for $\lambda_t$ to be positive. We have added a short remark about this. > Is fitting the betting framework into the coin flip case the reason why the payoff functions are restricted to $[-1,1]$? Nonnegative (super)martingales starting at 1 are central in the testing by betting framework (in particular, it allows to instantiate Ville's inequality to guarantee ``time-uniform'' type-1 error control under the null hypothesis). 
At round $t$, the wealth is updated as $\mathcal{K}_t = \mathcal{K}_{t-1} \cdot (1+\lambda_t f_t(Z_t,W_t))$, and hence, restricting the payoff functions and betting fractions to $[-1,1]$ suffices to guarantee that the resulting process $(\mathcal{K}_t)_{t\geq 0}$ is nonnegative. If the payoff is bounded in any other range, it can be transformed into $[-1,1]$, and we do not know how to design unbounded (on one side) bets that still result in nonnegative (super)martingales for our problem. > Why the ONS in algorithm 1 is different from the one in Cutkosky and Orabona ... e.g., the update of $z_t$ and $\lambda_{t+1}$. We thank the reviewer for reflecting upon that part. In the definition of $z_t$, there was indeed a typo: $z_t:=f_t/(1-\lambda_t f_t)$, which we have now fixed. Regarding the update rule for $\lambda_{t+1}$, there is a small change: instead of truncating at $-1/2$, we truncate at $0$ (which is allowed and does not affect any theoretical claims of the paper). The reason behind this is the following: to maximize the power of the test, we aim to maximize the growth rate of the underlying wealth process, which corresponds to choosing $\lambda_\star = \text{argmax}_{\lambda\in [-1,1]}\mathbb{E}\log (1+\lambda f(Z_t,W_t))$. For a given predictor $g$, consider the payoff corresponding to the squared risk: $f(Z_t, W_t) = W_t\cdot g(Z_t)$, where $W_t\in\{-1,+1\}$ and $g(Z_t)\in [-1,1]$. Assuming the 2ST null is false and $g$ is a predictor with non-trivial prediction risk, we have that $\mathbb{E}f(Z_t,W_t)>0$, in which case it is easy to see that $\lambda_\star$ has to be positive. In early simulations, we also did not observe any major differences between the two truncation options. We have clarified that part in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. **Regarding the variance of the growth rate:** I understand the test's consistency is irrelevant to the variance of the growth rate.
However, I was referring to the testing power, which is a finite-sample result. That is related to the variance of the growth rate. As for the comparison with [1], did you use CNN for your method and use a kernel to construct the sequential MMD test for [1]? --- Reply to Comment 1.1.1: Comment: > Regarding the variance of growth rate: I understand the test's consistency is irrelevant to the variance of the growth rate. However, I was referring to the testing power, which is a finite-sample result. That is related to the variance of the growth rate. Variance of the log wealth is not a metric that has ever been considered before in any paper on betting from the original (non-testing) works in information theory like Kelly (1956), Breiman (1961), Cover (1970s), or in any paper on using betting for statistical hypothesis testing (i.e., papers cited in our work). It is the variance of the payoffs that is important, not the variance of the growth rate (i.e., not the variance of the log payoffs): in particular, this is the reason why we have two terms in our bounds on the growth rate and why under the ''low-variance'' regime, we get a faster growth rate (the expected margin is not squared). We have finite-sample guarantees on the rate of growth of wealth (due to the use of nonasymptotic regret bounds in our proofs), but we have stated the asymptotic growth rate because it is more interpretable. Regarding additional finite-sample results, we now also have bounds for the stopping time $\tau$, namely $P(\tau>t)$ and $\mathbb{E}[\tau]$, where an upper bound on the latter easily follows from the former bound (since $\mathbb{E}[\tau] = \sum_{t=0}^\infty P(\tau>t)$) for the case when the 2ST null is false. For simplicity, assume that the same classifier $g$ (e.g., the Bayes-optimal classifier) is utilized for betting using the payoff: $f(Z_t,W_t)=W_t\cdot g(Z_t)$.
We show that: $\mathbb{E}[\tau] \leq O\Big( \Big( \frac{1}{\mathbb{E}[W \cdot g(Z)]} + \frac{1}{\sup_{s\in[0,1]}(1-R_s(sg))} \Big)\cdot \log(1/\alpha) \Big)$, meaning that the expected stopping time is upper bounded by the sum of the reciprocals of the expected margin of a classifier and the deviation of its (optimized) squared risk from the worst-case value. We get the squared risk term exactly because we account for the variance of the payoffs (otherwise, the upper bound is proportional only to the reciprocal of the *squared* expected margin of a classifier, which is always worse). We have added this result with the proof to the Appendix, and we are more than happy to provide a short proof sketch! > As for the comparison with [1], did you use CNN for your method and use a kernel to construct the sequential MMD test for [1]? This is correct. > I re-read the paper, especially the experiments and conclusion. If I understand that correctly, the gain of testing power is attributed to the use of a predictive model over the kernel instead of the form of constructed statistic (e.g., with the payoff $W_tg(Z_t)$)? In fact, I found the gap between the proposed work and [1] smaller than I thought. The sequential MMD test, which is used as an example for the test by betting in [1], can also be simply constructed by replacing the kernel with a predictive model, although the result might not be contained in the RKHS anymore. Hence, the superiority of the proposed method over the ''kernel'' test counterparts seems to be questionable to me. While the two betting-based tests are closely related (which we highlighted at the end of the Introduction), our paper actually constructs a different betting game from that in [1]. Both papers are indeed similar from the standpoint of using a chosen distance measure between distributions (e.g., kernel-MMD in [1], or the squared risk of a classifier in our work) as a starting point for designing effective betting strategies.
However, our paper is not just about constructing a valid test, but characterizing its quality: we explicitly relate the growth rate and consistency to interpretable and intuitive metrics that are associated with classifiers, namely their risks and margin. When we use the squared risk as a measure of distance, the resulting bets indeed happen to take a similar form as in the sequential kernel-MMD test from [1]. However, our theoretical results provide practical suggestions for designing powerful predictive 2STs, e.g., we show that training a classifier via direct margin maximization / minimizing the hinge loss (which may be hinted at by the distance measure in [1]) will hurt the power of the resulting 2ST, i.e., the growth rate of the wealth. Further, we also relate our test to earlier sequential predictive 2STs and show how we address their limitations. Last, we never argued that either approach (kernel- or classifier-based) is superior to the other, and we confirmed this across different experimental setups. Our motivation was not to develop a predictive test that should replace a kernel one but rather to complement the existing kernel approach with a new method that may be better suited for high-dimensional or structured data (like images).
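To make the betting mechanics discussed in this thread concrete, here is a minimal, self-contained sketch of a sequential predictive two-sample test: an online logistic-style predictor $g$, the payoff $f_t = W_t \cdot g(Z_t)$, an ONS-style betting-fraction update truncated to $[0, 1/2]$, and rejection once the wealth exceeds $1/\alpha$. The predictor, learning rate, and ONS constants below are illustrative choices, not the paper's exact algorithm, and the sign conventions of the ONS step may differ slightly from those in the paper.

```python
import numpy as np

def ons_betting_2st(stream, alpha=0.05, lr=0.1):
    """Sketch of a sequential predictive two-sample test by betting.

    `stream` yields (z, w) pairs with w in {-1, +1} marking the sample's
    population. An online logistic model plays the predictor g; its tanh
    output lies in [-1, 1], so the payoff f = w * g(z) lies in [-1, 1].
    The betting fraction lambda is updated with an ONS-style step
    (illustrative constants) and truncated to [0, 1/2], which keeps every
    wealth factor 1 + lambda * f in [1/2, 3/2], hence the wealth nonnegative.
    Reject the null as soon as wealth >= 1/alpha (Ville's inequality).
    """
    theta = None
    wealth, lam, A = 1.0, 0.0, 1.0
    for t, (z, w) in enumerate(stream, start=1):
        z = np.atleast_1d(np.asarray(z, dtype=float))
        if theta is None:
            theta = np.zeros(z.size + 1)          # weights + bias
        x = np.concatenate([z, [1.0]])
        g = np.tanh(theta @ x)                    # predictor value in [-1, 1]
        f = w * g                                 # payoff in [-1, 1]
        wealth *= 1.0 + lam * f                   # bet a fraction lam of wealth
        if wealth >= 1.0 / alpha:
            return t, wealth                      # reject the 2ST null
        # ONS-style update of the betting fraction, truncated to [0, 1/2]
        grad = f / (1.0 + lam * f)                # gradient of log(1 + lam * f)
        A += grad ** 2
        lam = float(np.clip(lam + (2.0 / (2.0 - np.log(3.0))) * grad / A, 0.0, 0.5))
        # one online gradient step on the logistic loss for label (w + 1) / 2
        p = 1.0 / (1.0 + np.exp(-(theta @ x)))
        theta -= lr * (p - (w + 1) / 2) * x
    return None, wealth                           # stream exhausted, no rejection
```

Under the null, the wealth process is a nonnegative martingale started at 1, so the threshold $1/\alpha$ controls the type-1 error uniformly over time; under a well-separated alternative the wealth grows exponentially and the test stops early.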
Summary: The work considers the problems of sequential nonparametric two-sample and independence testing. The researchers propose a novel approach, which overcomes the issues of kernel-based testing, such as finding an appropriate kernel for high-dimensional or structured data like text and images. The authors empirically demonstrate the superiority of their tests over kernel-based methods in structured settings. Furthermore, the tests remain valid and powerful even when the data distribution drifts over time. Strengths: 1. The authors propose prediction-based betting strategies to help alleviate problems associated with kernel selection and adaptability to high-dimensional or structured data. 2. The authors provide compelling empirical evidence that their method outperforms existing kernel-based tests, especially in the context of structured settings. 3. The authors provide a theoretical analysis of the properties of the proposed method. Weaknesses: 1. The authors should provide more technical details about the principle of testing by betting strategies. This is a key concept in the paper, but the authors do not elaborate on it much. 2. The paper provides only a high-level overview of the research, making it challenging for general readers to comprehend. 3. The presentation is poor in that it mixes methodology, theory, and numerical study in a single section (e.g., Section 2). 4. It is uncommon for one paper to focus on two tests: the two-sample test and the independence test. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. Does the proposed method have any potential limitations on data dependency due to the use of nonparametric techniques? 2. Could the authors clarify how their method can retain its power when the data distribution drifts over time? 3. It is not clear how the proposed techniques can scale to larger datasets. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: The authors do not mention the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback on the paper. Below, we address in detail issues that have been raised in the review. > ...more technical details about the principle of testing by betting strategies...key concept in the paper but the author does not elaborate it too much in the paper... We have added more technical details (e.g., clarifying that under the null hypothesis, the resulting wealth process is a nonnegative martingale starting at 1, which is the key to invoking Ville's inequality to justify the type-1 error control) as well as a more intuitive explanation regarding the principle of testing by betting to the revision. > A high-level overview of the research makes it challenging for general readers to comprehend the paper. To improve the comprehension for general readers, we made the following updates in the revision: (a) a more detailed explanation of the principle of testing by betting (improving on both technical and intuitive sides), (b) a more comprehensive review of the most closely related works (making it easier to disseminate the contributions of our work). > The presentation is poor...mixed methodology, theory, and numerical study in a single section (e.g. Section 2). To improve the presentation of the paper, we have separated experiments from the methods and theory in the revision. > Uncommon that one paper focuses on two-sample and independence testing We kindly disagree with the reviewer that this point represents a weakness of our paper. While it is less common for a single paper to consider both problems, the two are closely related: an algorithm for one can be used for the other and vice versa (i.e., each problem reduces to the other, but this reduction may not be the optimal way to solve the problems). In our paper, we use the connection between the two problems explicitly: we design and evaluate sequential predictive tests for both settings. 
> Any potential limitations of the method on data dependency due to the use of nonparametric techniques? We do not see any limitations due to the nonparametric aspect currently, but perhaps we have not fully understood what the reviewer may be hinting at. > Clarify how the method can retain its power when data distribution drifts over time? Under the alternative, the power of our test depends on the performance of a classifier according to misclassification/squared risk. If the distribution drifts over time, then models updated via online gradient descent may retain their predictive power. In such cases, our test will still have high power despite the present distribution drift. Of course, if the distribution shifts adversarially, the method will not have power, but neither will any other method. So the implicit goal is to maintain type-1 error control under the null despite shifting distributions (beyond the iid assumption typically made in the literature) while retaining power against reasonable (but not all) distribution drifts. To put this into context, we conducted the following experiment (the results have also been added to the revision) where we consider four settings: (a) $P=Q= \mathcal{N}(0,1)$, i.e., the 2ST null is true, (b) $P= \mathcal{N}(0,1)$ and $Q= \mathcal{N}(0.5,1)$, i.e., the 2ST null is false, (c) up to time $t=250$, $P^{(t)}=Q^{(t)}= \mathcal{N}(0,1)$, but for $t > 250$, we have $P^{(t)}= \mathcal{N}(0,1)$ and $Q^{(t)}= \mathcal{N}(0.5,1)$, i.e., there is a distribution shift and the 2ST null is false, (d) up to time $t=250$, $P^{(t)}=Q^{(t)}= \mathcal{N}(0,1)$, but for $t > 250$, $P^{(t)}=Q^{(t)}= \mathcal{N}(0.5,1)$, i.e., there is a change in distribution yet the 2ST null is still true. In all settings, we monitor the tests until $T=2000$ observations. We use a standard logistic regression model as an underlying predictor with weights updated via online gradient descent. 
We have uploaded the figure with the respective rejection rates to the allowed PDF file. In a nutshell, our test controls the type-1 error whenever the 2ST null is true even if there is a shift in distribution, and retains high power if the alternative is true. > It is not clear how the proposed techniques can be scalable to larger datasets. The main computational burden of our test lies in the chosen classifier and learning algorithm. The data are processed one at a time in a sequential fashion, so it is only the per-point update cost that is critical. Hence, for many models trained online, using versions of stochastic gradient descent, like neural nets, our test will scale perfectly well to larger datasets. In fact, one of the advantages of our test over the batch tests (which would require thinking about the sample size in advance) is that our test adapts to the complexity of the problem at hand. Hence, if the null is false, our test may stop after processing a few hundred observations, whereas its batch counterpart will require training using an unnecessarily large sample. --- Rebuttal Comment 1.1: Comment: I have read the response. Thank you for your response.
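The four drift settings (a)-(d) described in the response above can be generated with a short helper. The function name and the interleaving scheme below are illustrative stand-ins, not taken from the paper's code.

```python
import numpy as np

def drift_stream(setting, T=2000, t_change=250, seed=0):
    """Yield (z, w) pairs for the four distribution-drift settings:

    (a) P = Q = N(0,1) throughout (null true);
    (b) P = N(0,1), Q = N(0.5,1) throughout (null false);
    (c) null up to t_change, then Q shifts to N(0.5,1) (null becomes false);
    (d) both P and Q shift to N(0.5,1) at t_change (null stays true).

    At each round a fair coin picks the population label w in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    for t in range(1, T + 1):
        mu_p, mu_q = 0.0, 0.0
        if setting == "b" or (setting == "c" and t > t_change):
            mu_q = 0.5
        if setting == "d" and t > t_change:
            mu_p = mu_q = 0.5
        w = 1 if rng.random() < 0.5 else -1       # which population is observed
        mu = mu_q if w == 1 else mu_p
        yield rng.normal(mu), w
```

Feeding such a stream to a sequential test with an online-updated predictor reproduces the qualitative behavior described above: type-1 error control in (a) and (d), high power in (b) and (c).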
Rebuttal 1: Rebuttal: Dear Reviewers, We wanted to thank you for your time and for your valuable feedback! We hope that our responses address many/most of the existing concerns. We also attach a PDF file that contains the results of several additional experiments you asked for. If you have any additional questions, we would love to hear those from you and engage in a discussion. Looking forward. Sincerely, The authors. Pdf: /pdf/51a23b480417dea0777612d0a0158e71f70746e5.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The performance of kernel-based nonparametric two-sample and independence tests can break down for complex data like images. The paper proposes sequential two-sample and independence tests based on the misclassification rate. Compared to kernel-based tests, the proposed tests, which use CNNs, reject the null hypothesis faster than sequential kernel-based two-sample and independence tests on image data while still controlling the type I error rate. The power comparison between the two approaches is somewhat inconclusive. Strengths: The proposed approach is quite general and well suited to complex data like images that can be modeled with DNNs. The work is well supported theoretically. The empirical results argue favorably for the approach. The paper is well written. Weaknesses: Some discussion of the practical computational complexity of the approach compared to kernel-based tests would improve the paper. An empirical comparison to kernel-based tests on simpler data would highlight the advantages/disadvantages of each approach. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Optimizing the bandwidths is one drawback of kernel tests - how much hyperparameter tuning was required to achieve the performance reported? Have the authors compared the approach to kernel-based tests on simpler data? How do the runtimes compare to kernel-based tests? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Could be expanded Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback on the paper. Below, we address in detail issues that have been raised in the review. > Some discussion of the practical computational complexity of the approach compared to kernel-based tests would improve the paper. How do the runtimes compare? Below, we assume that the tests are deployed for $d$-dimensional data. At time $T$, the total accumulated computation for the kernelized test is $O(dT^2)$. For our test, the answer depends on the chosen classifier and learning algorithm. For example, using logistic regression in combination with gradient descent for updating model parameters results in cheap updates and payoff evaluation (both are $O(d)$ at each round, and hence the total accumulated computation at time $T$ is $O(dT)$). For $k$-NN classifier, no parameters have to be updated, yet evaluating payoffs becomes more expensive with a growing sample size, resulting in the total accumulated computation of $O(k dT^2)$ at time $T$. For more complex models like neural nets, runtime depends on the chosen architecture: the total accumulated computation at time $T$ is $O((cB+F)T)$, where $F$ and $B$ are the costs of forward-propagation and back-propagation steps respectively and $c$ is the number of back-propagation steps applied after processing the next point (the exact cost depends on the architecture). We have added a paragraph about the runtime comparison to the paper. > Empirical comparison to kernel tests on simpler data would highlight the advantages/disadvantages of each approach. Overall, our empirical findings illustrate that neither of the approaches (kernel-based vs predictive) dominates the other and that our new approaches are surprisingly versatile across settings. To elaborate, some additional simulations that highlight the advantages/disadvantages of kernel approaches versus our new methods are already in Appendix E. 
While it may seem a natural guess that kernel methods would perform better for ``simpler'' unstructured data, our results in Appendix E suggest that this is not always the case. If there is any particular data distribution you would like us to add, we would be happy to do so. > Optimizing the bandwidths is one drawback of kernel tests - how much hyperparameter tuning was required to achieve the performance reported? For simulations on synthetic data, we utilized the knowledge of the underlying distributions to estimate kernel hyperparameters, taking those to be inversely proportional to the second moment of the random variables (details are provided in Appendix E). For simulations on real data, we utilized the median heuristic (standard practice) to estimate these hyperparameters. For our method, we applied minimal hyperparameter tuning: (a) for MLPs and CNNs, we committed to a single architecture throughout the experiments and used early stopping for regularization, (b) for experiments with $k$-NN classifier, there are no hyperparameters to be tuned except for the number of neighbors, and we used the square root rule (standard practice). In fact, one important advantage of our sequential test is that various design choices (increasing/decreasing regularization, changing neural network architecture) can also be updated on-the-fly.
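As a rough illustration of the $O(dT^2)$ accumulated cost attributed to the kernelized test in the runtime discussion above: a witness-style kernel payoff at round $t$ must touch every past pair, so each round costs $O(d\,t)$ kernel evaluations. The payoff form below is an assumption made for illustration only; it is not claimed to be the exact payoff of the cited kernel test.

```python
import numpy as np

def rbf(x, y, bw=1.0):
    """Gaussian (RBF) kernel; each evaluation costs O(d)."""
    x, y = np.atleast_1d(np.asarray(x, float)), np.atleast_1d(np.asarray(y, float))
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * bw ** 2)))

def witness_payoff(x_t, y_t, past_x, past_y, bw=1.0):
    """Hypothetical witness-style payoff at round t.

    It compares the new pair (x_t, y_t) against all t-1 past pairs, so the
    round costs O(d * t) and the accumulated work after T rounds is O(d * T^2).
    The tanh squashes the average into [-1, 1], as a betting payoff requires.
    """
    if not past_x:
        return 0.0
    s = sum(rbf(x_t, xs, bw) - rbf(x_t, ys, bw) - rbf(y_t, xs, bw) + rbf(y_t, ys, bw)
            for xs, ys in zip(past_x, past_y))
    return float(np.tanh(s / len(past_x)))
```

By contrast, a parametric online classifier pays only $O(d)$ per round for its forward pass and gradient step, which is the comparison drawn in the response above.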
null
null
null
null
null
null
Function Space Bayesian Pseudocoreset for Bayesian Neural Networks
Accept (poster)
Summary: This paper presents an alternative approach to the construction of Bayesian pseudocoresets by considering the quality of function space approximations. Specifically, they seek to minimise the KL-divergence between function space approximations of posteriors conditioned on the Bayesian pseudocoreset, and the true posterior. Strengths: * I consider the method very well motivated---the use of variational approximations in weight-space is known to result in pathologies, particularly with high-dimensional models such as BNNs. Function space approximations offer a solution to this by instead computing variational approximations to distributions that exhibit much less multi-modality and related behaviour that is difficult to approximate. * Although the authors largely follow the work of Rudner et al., they depart from this method in estimating the parameter values of the function space approximation. Instead, they use empirical estimates which, although inexact, I expect result in a considerable speed-up in computation time. * Experimental results are good, with the method outperforming the benchmarks. Weaknesses: * A lot of approximations are involved in the method, and it would be nice to see some evaluation of the effects of each. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive comments. For the weaknesses, please refer to our general responses [G3, G4, G5]. --- Rebuttal Comment 1.1: Comment: Many thanks for your response. Regarding [G4]. The Jacobian approximation is not the only approximation used---indeed, the use of the Jacobian is an approximation in itself. A comment on the implications of the linearised Laplace approximation would be greatly appreciated in an updated version of the paper. [G5] discusses the aforementioned points, however doesn't extend beyond comments on scalability of the algorithm. Again, it would be beneficial to the reader to understand the implications of a Gaussian approximation in function space. Nonetheless, I maintain my original score.
Summary: The following work proposes a new way to construct Bayesian Pseudo-Coresets. Particularly, the authors propose to optimise the KL-Divergence between posteriors associated with real and synthetic data in function space rather than the parameter space of large networks. The main argument posed by the authors is that function space is typically lower in dimension as compared to the parameter space of the network. Further, the authors use the theory laid out in [1] to approximate the function space posteriors associated with the real data and the pseudo-coreset. Further, since the function space can be the same for several architectures, the proposed method can also be used with multiple architectures. Strengths: 1. The argument behind the use of function space seems logical enough; inference in high-dimensional parameter space is indeed difficult. 2. The demonstrated results show a decent improvement compared to SoTA BPC methods. 3. The “multi-architecture” setting that comes for free because of working in the function space is very intriguing. To the best of my knowledge, this is the first work that enables one to use multiple architectures. Weaknesses: 1. Ambiguous writing. 2. Experiments can be better. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It seems that the paper could have been better written and presented. Especially for someone who is not aware of function space variational inference, the paper is very hard to follow. Perhaps sections 2.1 and 2.2 can be trimmed to accommodate some background on function-space VI. 2. Furthermore, related to the above point, though I understand why the term “function-space” might be appropriate, it gives an impression that the authors are referring to the “space of functions”. Then, it is mentioned that “the function space is typical of much lower dimension compared to the weight space”. However, the space of functions has infinite dimensions. 
Are the authors perhaps referring to the dimension of the range of the functions? This is actually a bit confusing. If this is not the case, then the motivation of function space VI is not strong enough. 3. In Eq. 16, the authors note that while calculating $\hat{\mu_u}$ there is a stop-gradient operator in action. The same is not said for the step calculating $\hat{\Phi_u}$. I assume this means that the gradients are back-propagated through the SGD steps to update the pseudo-coresets. If yes, then I would like to see the effect of removing the stop-gradient operator while calculating $\hat{\mu_u}$. Further, doesn’t this make the proposed method computationally expensive too? As is the case with BPC? However, if this is not the case, then this should be highlighted more explicitly. 4. The authors have shown the architecture generalization by changing the type of normalization layer for a single architecture. However, in my opinion, this is not good enough. In the dataset summarisation literature, architecture generalization is often shown by changing the architecture itself. Hence, I would suggest bringing Tables 5 & 6 from the appendix to the main manuscript. 5. Do the authors make use of Differentiable Siamese Augmentation (DSA) [2]? This is not mentioned in the main text and can affect the result by large margins. If not, then I suggest the authors try it as this can improve the results as well. 6. I think the authors can also include a few basic Dataset Condensation methods in the comparison, e.g., GM [3], DM [4], DSA [2]. 7. In Table 6 of the appendix, the authors note that there is performance degradation while using big architectures like ResNet. However, doesn’t this in some sense defy the motivation noted in the proposed work? Since the proposed method operates in function space rather than the parameter space which (as the authors say) is lower dimensional, shouldn’t the performance be agnostic to the architecture choice? 8. 
Few recent works might be relevant to mention in related works [5, 6, 7] [1] T. G. Rudner, Z. Chen, Y. W. Teh, and Y. Gal. Tractable function-space variational inference in bayesian neural networks, 2022. [2] B. Zhao and H. Bilen. Dataset condensation with differentiable siamese augmentation. In Proceedings of The 38th International Conference on Machine Learning (ICML 2021), 2021. [3] B. Zhao, K. R. Mopuri, and H. Bilen. Dataset condensation with gradient matching. In International Conference on Learning Representations, 2021. [4] Bo Zhao and Hakan Bilen. Dataset condensation with distribution matching. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 6514–6523, 2023 [5] Xu, Yue, et al. "Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection." arXiv preprint arXiv:2305.18381 (2023). [6] Tiwary, Piyush, Kumar Shubham, and Vivek Kashyap. "Constructing Bayesian Pseudo-Coresets using Contrastive Divergence." arXiv preprint arXiv:2303.11278 (2023). [7] Yin, Zeyuan, Eric Xing, and Zhiqiang Shen. "Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective." arXiv preprint arXiv:2306.13092 (2023). Overall, the paper is definitely intriguing and excites the reader. However, the lack of background in the paper confuses the reader. Further, some points need to be clarified (see above). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive comments. We respond to the individual comments below: **[W1, Q1]** Thank you for the valuable suggestion. We will consider reorganizing the paragraphs during the paper revision. **[Q2]** We agree with your point that the statement 'function space itself has a lower dimension compared to weight space' can be confusing. It seems better to rephrase it as 'functions have a lower dimension than weights,' as you suggested. **[Q3]** In fact, in the process of calculating $\hat\Psi$ in Equation 16, we have indicated the stop-gradient operator as **'sg.'** This implies that back-propagation is no longer applied to the parameters computed during SGD steps in our approach. The gradient of the pseudocoreset flows through the input $u$ in the calculation of $\hat\mu$. Moreover, it requires several tens of SGD steps to reach the MAP solution of the pseudocoreset, and an additional 30 SGD steps are used to obtain the empirical variance. Storing gradients throughout all these processes would be computationally very expensive. This does not align with our intention to create a scalable BPC algorithm. **[Q4]** We will make sure to do that during the revision. **[Q5]** We agree with the statement that DSA [1] impacts the performance. Following the convention in the existing literature, we employed DSA in our approach and will make sure to mention it in the paper. **[Q6]** To the best of our knowledge, dataset distillation primarily focuses on achieving test performance matching, which can be seen as taking the alignment of the **point estimation** performance of parameters as the objective. Accordingly, the evaluation of the distilled dataset is compared using SGD performance. On the other hand, BPC methods like ours aim to address how well **posterior distributions align.** Consequently, in our paper, we evaluate using the SGHMC sampling technique. 
Nevertheless, for the purpose of comparison with the relevant field, a comparison against the SGD performance of a few dataset condensation methods yields the following [1, 2, 3].

**< Test accuracies for CIFAR10 dataset >**

| ipc | GM (SGD) | DSA (SGD) | DM (SGD) | Ours (SGHMC) |
|-----|----------------|----------------|----------------|--------------------|
| 1 | 28.3 $\pm$ 0.5 | 28.8 $\pm$ 0.7 | 26.0 $\pm$ 0.8 | **35.5 $\pm$ 0.3** |
| 10 | 44.9 $\pm$ 0.5 | 52.1 $\pm$ 0.5 | 48.9 $\pm$ 0.6 | **62.3 $\pm$ 0.3** |
| 50 | 53.9 $\pm$ 0.5 | 60.6 $\pm$ 0.5 | 63.0 $\pm$ 0.4 | **71.2 $\pm$ 0.2** |

**[Q7]** In the appendix, we have indicated that there is typically a performance degradation of the pseudocoreset in larger architectures such as ResNet. We believe this is primarily due to the limited dataset size used to evaluate the trained pseudocoreset, leading to increased overfitting in these larger architectures. This phenomenon is also commonly observed in existing dataset distillation methods. Our contribution, utilizing the function space, enables more scalable use of larger architectures like ResNet for pseudocoreset training, and our method's performance on ResNet is superior to that reported in the existing literature [4]. This reaffirms the validity of our approach.

**[Q8]** We appreciate the valuable recommendations, and we will include the mentioned papers in the related work section during the revision.

**[W2]** Please refer to our general responses [G2, G3].

---

**References**

[1] B. Zhao and H. Bilen. "Dataset condensation with differentiable siamese augmentation." ICML, 2021.
[2] G. Cazenavette, T. Wang, A. Torralba, A. A. Efros, and J.-Y. Zhu. "Dataset Distillation by Matching Training Trajectories." CVPR, 2022.
[3] B. Zhao and H. Bilen. "Dataset Condensation with Distribution Matching." WACV, 2023.
[4] R. Yu, S. Liu, and X. Wang. "Dataset distillation: A comprehensive review." 2023.
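As background for the SGHMC evaluation mentioned in [Q6], the sampler follows the stochastic-gradient Hamiltonian Monte Carlo update of Chen et al. (2014). Below is an illustrative 1-D toy (sampling a standard normal), not the authors' implementation; the step size, friction, and iteration counts are arbitrary choices for the sketch:

```python
import numpy as np

def sghmc_sample(grad_u, theta0, n_steps, eps=0.1, alpha=0.1, seed=None):
    """SGHMC update: v' = v - eps * grad_U(theta) - alpha * v + N(0, 2*alpha*eps);
    theta' = theta + v'. With small eps, theta approximately samples exp(-U)."""
    rng = np.random.default_rng(seed)
    theta, v = theta0, 0.0
    samples = []
    for _ in range(n_steps):
        v = v - eps * grad_u(theta) - alpha * v + rng.normal(0.0, np.sqrt(2 * alpha * eps))
        theta = theta + v
        samples.append(theta)
    return np.array(samples)

# Toy target: U(theta) = theta^2 / 2, i.e. theta ~ N(0, 1).
samples = sghmc_sample(grad_u=lambda t: t, theta0=0.0, n_steps=20000, seed=0)
burned = samples[2000:]  # discard burn-in
```

In the paper's setting `grad_u` would be a minibatch estimate of the gradient of the negative log posterior conditioned on the pseudocoreset; here it is the exact gradient of a 1-D quadratic.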
---

Rebuttal Comment 1.1: Title: Response on the rebuttal. Comment: Thanks, Authors, for the clarification. Some of my questions are answered, but not all. (i) The computational complexity has to be quantified. (ii) The distillation methods compared against are rather old (there are recent distillation methods such as MTT). (iii) I am still not convinced about the scalability of the approach with larger architectures. (iv) The performance of the method without the use of DSA needs to be measured and reported.

---

Reply to Comment 1.1.1: Title: Response to the response. Comment: We appreciate your effort to review our paper and responses.

**(i)** Each iteration of our algorithm consists of training the BPC to find the MAP solution, running a few additional SGD steps to obtain the empirical covariance for Eq. 17, and calculating the final loss to update the pseudocoreset. In terms of memory, as previously mentioned in Section 3.5 and general response [G1], FBPC excels. In terms of time, however, our method requires slightly more because additional SGD steps are needed to acquire the empirical covariance. When using FBPC-random, these steps can be reduced, trading a slight decrease in performance for time savings. The time complexity is mainly dominated by the SGD steps required to find the MAP solution. Finally, the wall-clock time for one pseudocoreset update is as follows. We will include this discussion in future revisions of the paper.

**< wall-clock time (sec) for 1 step update (CIFAR10) >**

| ipc | 1 | 10 | 50 |
|:-------:|:---------------:|:---------------:|:---------------:|
| BPC-fKL | 1.04 $\pm$ 0.10 | 1.37 $\pm$ 0.13 | 2.59 $\pm$ 0.86 |
| FBPC | 1.50 $\pm$ 0.15 | 3.29 $\pm$ 0.51 | 8.38 $\pm$ 0.48 |

**(ii)** We have conducted a performance comparison between our method, MTT [1], and FrePo [2] in the field of dataset distillation.
MTT and FrePo report SGD performance in their respective papers, while our FBPC reports SGHMC performance as outlined in our paper. When compared with other dataset distillation methods, we believe that our approach achieves performance comparable to state-of-the-art DD methods. Additionally, it is noteworthy that our method consumes significantly less memory than MTT, and it can also handle cases like Tiny-ImageNet ipc 50, which were not achievable with the FrePo approach.

| | ipc | MTT (SGD) | FrePo (SGD) | FBPC (SGHMC) |
|:-------------:|:---:|:----:|:-----:|:----:|
| CIFAR10 | 1 | 46.3 | 46.8 | 35.4 |
| | 10 | 65.3 | 65.5 | 62.3 |
| | 50 | 71.6 | 71.7 | 71.2 |
| CIFAR100 | 1 | 24.3 | 28.7 | 21.0 |
| | 10 | 40.1 | 42.5 | 39.7 |
| | 50 | 47.7 | 44.3 | 44.4 |
| Tiny-ImageNet | 1 | 8.8 | 15.4 | 10.1 |
| | 10 | 23.2 | 25.4 | 19.4 |
| | 50 | 28.0 | - | 26.4 |

**(iii)** As indicated in the general response [G1], as the network size increases, the memory requirements for weight-space BPC can become prohibitively large, rendering training infeasible. In contrast, our FBPC offers a distinct advantage, as it does not impose significant memory burdens even for larger architectures. Therefore, conducting experiments with FBPC is not hindered by memory constraints, allowing us to explore results on larger architectures. Indeed, we plan to conduct experiments on even larger architectures to provide a comprehensive understanding of our method's scalability. However, for new architectures, the creation of expert trajectories is a prerequisite. Given the potentially time-consuming nature of this process, particularly for substantial architectures, we aim to share the results as soon as experimentation is completed. If there are any other concerns, please feel free to inform us. We appreciate your input and are committed to addressing any additional questions or considerations.
**(iv)** We present results for BPC-fKL and FBPC without using DSA [3] and without any augmentation during training, and we will include these details in the paper. Interestingly, for the ipc 1 case of BPC-fKL, performance improved when DSA was not applied. However, in all other cases, not using DSA leads to an average performance drop of approximately 4.7%. Moreover, even when training BPC without augmentation, we observe that function-space BPC outperforms weight-space BPC.

**< BPC performance with and without DSA >**

| ipc | 1 | 10 | 50 |
|------------------|----------------------|----------------------|----------------------|
| BPC-fKL (no DSA) | **37.26 $\pm$ 1.65** | 50.48 $\pm$ 1.39 | 60.75 $\pm$ 0.26 |
| FBPC (no DSA) | 33.69 $\pm$ 2.73 | 55.07 $\pm$ 1.30 | 66.03 $\pm$ 0.21 |
| FBPC (DSA) | 35.45 $\pm$ 0.31 | **62.33 $\pm$ 0.34** | **71.23 $\pm$ 0.17** |

---

**References**

[1] G. Cazenavette, T. Wang, A. Torralba, A. A. Efros, and J.-Y. Zhu. "Dataset Distillation by Matching Training Trajectories." CVPR, 2022.
[2] Y. Zhou, E. Nezhadarya, and J. Ba. "Dataset Distillation using Neural Feature Regression." NeurIPS, 2022.
[3] B. Zhao and H. Bilen. "Dataset condensation with differentiable siamese augmentation." ICML, 2021.
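For context, the key idea of DSA [3] is to sample one set of augmentation parameters per step and apply the identical (differentiable) transform to both the real and the synthetic batch, so that the matching signal is consistent and gradients can flow into the synthetic pixels. A minimal sketch of this shared-parameter structure, using a simple translation augmentation (illustrative only; in DSA the transforms are differentiable, whereas `np.roll` here merely shows the shared sampling, and is not the authors' or the DSA paper's code):

```python
import numpy as np

def siamese_translate(real, syn, rng, max_shift=2):
    """Apply the SAME randomly sampled shift to the real and synthetic batches.

    real, syn: arrays of shape (N, H, W). A single (dy, dx) is drawn per call,
    so both branches of the matching objective see the identical transform.
    """
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shift = lambda batch: np.roll(batch, (dy, dx), axis=(1, 2))
    return shift(real), shift(syn)

rng = np.random.default_rng(0)
real = rng.normal(size=(4, 8, 8))  # stand-in for a real-image batch
syn = rng.normal(size=(2, 8, 8))   # stand-in for the synthetic coreset batch
real_aug, syn_aug = siamese_translate(real, syn, rng)
```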
Summary: This paper introduces a novel approach called Function Space Bayesian Pseudocoreset (FBPC) for constructing Bayesian pseudocoresets for Bayesian neural networks. A Bayesian pseudocoreset is a compact synthetic dataset that summarizes essential information from a large-scale dataset and can be used as a proxy dataset for scalable Bayesian inference. Typically, the construction of a Bayesian pseudocoreset involves minimizing the divergence between the posterior conditioned on the pseudocoreset and the posterior conditioned on the full dataset. However, evaluating this divergence measure can be challenging, especially for models like deep neural networks with high-dimensional parameters. In contrast to previous methods that construct and match coreset and full-data posteriors in weight space (model parameter space), the proposed method operates directly in function space. By working directly in function space instead of weight space, several challenges, such as limited scalability and multi-modality issues, can be bypassed. The method constructs variational approximations to the coreset posterior in function space via linearization and a variational approximation to the true posterior distribution. It then matches these approximations with full-data posteriors also defined in function space. Working directly in function space allows this approach to scale well even for large models where traditional weight-space approaches struggle computationally. Additionally, it is not constrained to matching a single specific neural network architecture, but allows training with multiple architectures simultaneously while still achieving similar results.
Furthermore, using function-space matching improves out-of-distribution robustness compared to previous approaches based on weight space. Overall, the experiments conducted demonstrate that using Function Space Bayesian Pseudocoresets leads to enhanced uncertainty quantification abilities compared to traditional methods based solely on weights or model parameters.

Strengths:
* The authors tackle a very interesting problem: extracting reliable posterior distributions out of BNNs to perform more precise inference. The Bayesian pseudocoreset approach allows for a simple but practical solution to this task.
* The paper is easy to follow and is very well written. The authors provide much of the context needed to understand the submission and the previous state of the art.
* The function-space approach is a very promising framework to conduct inference in BNNs and other models, and advances such as this one would help spread its usage.

Weaknesses:
* While reading the paper, I could not help but miss a comparison with the concept of inducing points, so common in sparse variational approximations for Gaussian processes. From my point of view this is missing and would be interesting, since the proposed approach seems to be conceptually very strongly related to SVGPs, so including a comparison would be helpful.
* To avoid scalability issues regarding the Jacobian defined in the linearized approximation, the authors approximate the posterior in function space using variational Gaussian distributions that must then be fit. This can be scalable, although I fear it may induce strong bias in the final posterior estimate, since there is no guarantee that the posterior distribution behaves in this manner.
* I think that some contributions relevant to the topic at hand are left out of the related work section, especially in the function-space variational inference paragraph in Section 4. I suggest including [1, 5] for context, and maybe [6] as well.
Moreover, in terms of approximating the posterior distribution of BNNs, all a-posteriori Laplace-approximation-based approaches are missing, such as [4] and [7]. These last two are not crucial for the topic at hand, although they could serve to complete the discussion further.
* I would expect a more detailed experimental phase, characterizing further the properties of the model. I would suggest complementing the results with some regression experiments. Moreover, it would be interesting to compare the resulting predictive distribution in each case with other models such as HMC, MFVI, or other function-space methods such as [1, 2, 3, 5]. It would also be interesting to know how the distributions obtained compare against a-posteriori methods such as [4].
* I would include some scalability study for the method, especially in terms of the size and dimensionality of the dataset.

_Note:_ I condition my final evaluation of the submission on the authors addressing these issues. Were this not the case, I may consider revising the score.

---

**References**:

[1] Rodríguez-Santana, S., Zaldivar, B., & Hernández-Lobato, D. (2022). Function-space inference with sparse implicit processes. In International Conference on Machine Learning (pp. 18723-18740). PMLR.
[2] Rudner, T. G., Chen, Z., Teh, Y. W., & Gal, Y. (2022). Tractable function-space variational inference in Bayesian neural networks. Advances in Neural Information Processing Systems, 35, 22686-22698.
[3] Ma, C., & Hernández-Lobato, J. M. (2021). Functional variational inference based on stochastic process generators. Advances in Neural Information Processing Systems, 34, 21795-21807.
[4] Deng, Z., Zhou, F., & Zhu, J. (2022). Accelerated linearized Laplace approximation for Bayesian deep learning. Advances in Neural Information Processing Systems, 35, 2695-2708.
[5] Ma, C., Li, Y., & Hernández-Lobato, J. M. (2019). Variational implicit processes. In International Conference on Machine Learning (pp. 4222-4233).
[6] Fortuin, V. (2022). Priors in Bayesian deep learning: a review. International Statistical Review, 90(3), 563-591.
[7] Antorán, J., Janz, D., Allingham, J. U., Daxberger, E., Barbano, R. R., Nalisnick, E., & Hernández-Lobato, J. M. (2022). Adapting the linearised Laplace model evidence for modern deep learning. In International Conference on Machine Learning (pp. 796-821). PMLR.

Technical Quality: 3 good
Clarity: 2 fair

Questions for Authors:
* What role does the forward KL divergence play in constructing the Bayesian pseudocoresets, and why would you prefer it to a well-defined distance measure or any other divergence measure? Does the asymmetry of the KL divergence bias the results in any significant manner that one should be aware of when using the method?
* Have you performed any checks comparing the usage of the full Jacobian [lines 151 and 152] with the case where you induce the variational approximation to the finite-dimensional distributions of the posteriors in function space? I agree that this cannot be done in general, especially for big BNNs where the computational cost of computing the Jacobian is high. However, tests could be performed on small BNNs to check whether this assumption holds in those simpler instances. If you consider this not to be necessary, please argue why.
* Robustness to the selection of BNN architecture is an important point in the submission. While I agree that the results obtained in this regard seem to point in that direction, I would also argue that differences might be observed if it were not for the approximations regarding the linearization of the system and the imposed Gaussian variational form. Is there any reason to think otherwise?
* Why would this approach be preferred over those that obtain predictive distributions with a-posteriori approximations such as [4] and [7]?

Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good

Limitations:
* The Bayesian pseudocoreset is constructed by comparing only Gaussian distributions, which may be insufficient to express the different posteriors induced by the data.
* The experimental part of the submission could be improved.
* There are no studies on the scalability of the method.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive comments. We respond to the individual comments below:

**[W1]** Thanks for your insightful comment. We will include the relevant discussion in the paper. As you pointed out, inducing points in stochastic variational Gaussian processes and the functional Bayesian pseudocoresets in BNNs are similar in their purpose.
* SVGP: In the context of stochastic variational Gaussian processes, optimizing the inducing points $Z=\{z_1,\dots,z_n\}$ involves maximizing the ELBO so that a variational Gaussian distribution $q(f_{tr},f_z)$ well approximates the posterior distribution $p(f_{tr},f_z|y_{tr})$. This variational distribution is composed of two parts: the conditional Gaussian $p(f_{tr}|f_z)$ and the Gaussian variational distribution $q(f_z)$. During this optimization, we train the inducing points $Z$ as well as the mean and variance of the variational distribution $q(f_z)$. The goal is to obtain a good approximation of the predictive distribution $p(y_{te}|x_{te}, D_{tr})$ during inference, all while keeping the computational cost low.
* FBPC: As outlined in Section 3.2, the formulation of FBPC involves directly minimizing the divergence between function-space posteriors, specifically $D(\nu_x, \nu_u)$.

To sum up, while they share some similarity in that both introduce a set of learnable pseudo data points, they are fundamentally different in their learning objectives: SVGP is interested in approximating the full-data posterior through the inducing points, while ours aims to make the pseudocoreset posterior as close as possible to the full-data posterior.

**[W2]** Please refer to our general response [G5].

**[W3]** We appreciate the valuable recommendations, and we will include the mentioned papers in the related work section during the revision.
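Since both function-space posteriors in the FBPC objective $D(\nu_x, \nu_u)$ of [W1] are approximated by Gaussians (with empirical, diagonal-style covariances, as the rebuttal describes), the forward KL evaluated on a finite set of function values has a simple closed form. A minimal numpy sketch for diagonal Gaussians, illustrative only and not the authors' code:

```python
import numpy as np

def forward_kl_diag_gauss(mu_x, var_x, mu_u, var_u):
    """KL( N(mu_x, diag(var_x)) || N(mu_u, diag(var_u)) ), summed over dimensions."""
    return np.sum(
        0.5 * np.log(var_u / var_x)
        + (var_x + (mu_x - mu_u) ** 2) / (2.0 * var_u)
        - 0.5
    )

# 1-D sanity check: KL(N(0,1) || N(1,4)) = ln(2) + (1 + 1)/8 - 1/2 ≈ 0.4431
kl = forward_kl_diag_gauss(np.array([0.0]), np.array([1.0]),
                           np.array([1.0]), np.array([4.0]))
```

Here `mu_x, var_x` would play the role of the full-data function posterior statistics and `mu_u, var_u` the pseudocoreset ones; the gradient with respect to the pseudocoreset then flows through `mu_u` and `var_u`.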
**[W4]** Please refer to our general response [G2] for the regression experiment. As you mentioned, a comparison with other methods like HMC would indeed be interesting. Analyzing the differences between the predictive distributions obtained from a-posteriori methods and those obtained from BPC could be an engaging avenue for future work.

**[W5]** Please refer to our general response [G1].

---

**[Q1]** There are two major advantages to using the forward KL divergence.
* Firstly, compared to the reverse KL divergence, which has a mode-capturing property, forward KL minimization favors solutions covering the entire target distribution. This characteristic makes it more suitable for Bayesian pseudocoreset construction, as evaluating model performance through Bayesian model averaging (BMA) benefits from diversity over the entire distribution rather than from focusing on individual modes. This advantage is also acknowledged in [1], where three divergence measures (forward KL, reverse KL, and the Wasserstein distance) were compared and the superiority of the forward KL was demonstrated.
* Secondly, another advantage of using the forward KL is the ease of computing the objective. While many other well-defined metrics are available, for BPC optimization it is crucial to have the gradient of the divergence between the two posteriors with respect to the BPC. We were able to derive this derivative for the forward KL through Proposition 3.1, which enabled us to compute FBPC. Exploring the possibility of similar derivations for other divergences could be a promising topic for future work.

**[Q2]** Please refer to our general response [G4].

**[Q3]** Thank you for the insightful comment. While it may raise the concerns you mentioned, Table 6 of the Appendix demonstrates the simultaneous learning of pseudocoresets for two entirely different architectures: CNN and ResNet.
This result illustrates that our method is capable of accommodating various architectures, and that it is not solely reliant on the linearization and the Gaussian variational form. Instead, we attribute this capability to the effect of similar function posteriors mentioned in Section 3.4. In addition, we explored using a diagonal covariance to achieve a more refined approximation of the posterior in weight-space BPC. However, as shown below, this approach still did not yield significant benefits in terms of architecture generalization. We believe that matching the function posterior distribution is more helpful for enhancing architecture robustness than focusing solely on how well we approximate the posterior in weight space.

| BPC-fKL | LN | BN | GN | IN |
|----------|-------|-------|-------|-------|
| Accuracy | 40.06 | 46.74 | 40.96 | 53.07 |

**[Q4]** As you mentioned, there are indeed various methods to obtain predictive distributions, including a-posteriori approximations [2, 3]. It is worth noting that our proposed method, utilizing the pseudocoreset, can also be applied in conjunction with a-posteriori methods to further reduce the computational burden of obtaining predictive distributions. Additionally, our approach offers the advantage of producing a **small condensed dataset** with versatile applications. This condensed dataset can be leveraged as memory data in transfer or continual learning settings, and it may hold value as a learnable prior in Bayesian settings. Utilizing a pseudocoreset as a learnable prior, rather than storing weights directly, opens up interesting possibilities for future work. Investigating the feasibility of using small datasets as a learnable prior represents a promising avenue for further research.

---

**References**

[1] Kim et al. "On Divergence Measures for Bayesian Pseudocoresets." NeurIPS, 2022.
[2] Daxberger et al. "Laplace Redux -- Effortless Bayesian Deep Learning." NeurIPS, 2021.
[3] Deng, Z., Zhou, F., & Zhu, J. "Accelerated Linearized Laplace Approximation for Bayesian Deep Learning." NeurIPS, 2022.

---

Rebuttal Comment 1.1: Title: Brief response to the rebuttal. Comment: I would like to thank the authors for the extensive work put forth in the rebuttal of all the reviews. I consider there is a good amount of work behind the responses, and I appreciate it deeply. As far as I am concerned, I consider my questions and doubts mostly addressed. I would suggest the authors augment the original text with some of the rebuttals' contents in order to provide a more complete explanation: for example, the discussion around SVGPs, the choice of divergences, or the usage of _a posteriori_ techniques may be enriching. I will maintain my score since I think it's suitable, but I am in fact even more positive about this submission than earlier. Thanks again!
Summary: This paper combines the works of Kim et al. (2022) and Rudner et al. (2022) to propose a Bayesian coreset learning method based on function-space variational inference. The authors find that this leads to coresets with improved accuracy/NLL compared to (Kim et al., 2022) on CIFAR and Tiny-ImageNet benchmarks, as well as improved robustness against data corruption on CIFAR10. Finally, the authors argue that their method allows for improved incorporation of different architectures in coreset learning, based on improved generalization across architectures with different types of normalization layers for CIFAR10. Generally, I found the paper quite incremental, although I appreciate that clear credit is given to prior work. The use of terminology is also rather loose: not every ad-hoc construction of a distribution over the weights constitutes an approximation to a Bayesian posterior. Finally, while performance is improved over that of Kim et al. (2022), in absolute terms it is still really far off that of training on the full dataset. I would really want to see that the coreset performance converges to that of training on the full dataset, so at this point I would argue for rejection.

Strengths:
* The method is new and described clearly
* Clear credit is given to prior work that the method is based on
* Performance is better than that of the baseline
* The architecture generalization supports/motivates the use of function-space inference

Weaknesses:
* The method as such seems really incremental; essentially it plugs the functional variational inference of Rudner et al. (2022) into the coreset learning and posterior "approximation" method of Kim et al. (2022)
* That being said, constructing a Gaussian distribution from an optimization trajectory by taking the mean and variance of the parameters and calling it a variational distribution seems like a bit of an abuse of terminology. Not every distribution approximates the posterior.
Further, this approach seems mostly equivalent to SWAG (Maddox et al., 2019), so this should be cited (and any potential difference discussed if applicable).
* The absolute accuracies even for the largest coresets are still really far off what we would get when training on the full dataset. So the method seems quite far away from being useful to practitioners.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
* Does the gap with full training close when further increasing the coreset size?
* The experimental section states that BPC-fKL was not possible on Tiny-ImageNet with 50 coreset points: why is that exactly? It would be useful to have some more details on memory consumption/runtime and the corresponding scaling behavior with coreset size/dataset size/dimension/number of parameters for the different methods. Reporting some empirical numbers from the experiments on VRAM use etc. could be helpful.

**Typos/minor:**
* l65-66: remove "number of"?
* l106: "the means" -> "means"
* l238: "we" -> "We"
* l247: "refer the appendix" -> "refer to the appendix"
* eq (6): $\Sigma_u, \Sigma_x$ appear for the first time here, but are only defined after eq (12).
* I found the name of the FBPC-random baseline really confusing vs. the random-coreset-points baseline; perhaps calling it FBPC-iso or FBPC-isotropic would be more descriptive?
* The references could use another round of editing; please ensure that all citations have a venue (missing e.g. for 1, 2, 6, 15, 22, 27), and that title capitalization and venue names are consistent.
* The abstract is a bit verbose while providing fairly limited specifics. I would suggest cutting back the introductory first half; unfamiliar readers won't learn much from this anyway, and experts know all of it already.

Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I would have liked to see some discussion of the large gap to training on the full dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive comments. We respond to the individual comments below:

**[W1]** We agree that our method extends the discussion in Kim et al. [1] to the function-space posterior. However, we want to emphasize additional significant contributions in our work.
* Firstly, we have demonstrated that lower-dimensional function-space matching for Bayesian pseudocoresets is easier than complex weight-posterior matching, which is a novel finding.
* Secondly, we propose scalable methods (Sections 3.2, 3.3) for matching the function-space posterior, enabling computations in high-dimensional BNNs. Our algorithm, which constructs variational posteriors directly in function space via trajectory statistics, is significantly different from previous function-space VI methods [2, 3], which construct variational posteriors as pushforwards of weight-space variational posteriors.
* Furthermore, by leveraging the similarity of function-space posteriors across different architectures, our method can create architecture-robust BPCs.

These unique contributions distinguish our paper from others in the field. We sincerely appreciate your consideration of these points.

**[W2]** Thank you for the valuable feedback. We will carefully reconsider the terminology 'variational distribution' in our paper during the revision period. Additionally, we acknowledge the similarities between our method for constructing posterior distributions and the SWAG method [4], and we will add this discussion to the related work section of the paper. Please note, however, that while SWAG collects statistics of weight-space trajectories, ours constructs statistics in **function space**, which makes ours more suitable and scalable for pseudocoreset construction.

**[W3]** Thank you for the thoughtful comment.
It is true that there remains a gap between the performance achievable by training large models on the full data and the results obtained through our approach. We have also recognized and contemplated this issue, and that led us to propose the FBPC method, which addresses the scalability problem of Bayesian pseudocoresets and allows us to extend the models and datasets to sizes that were not easily achievable with existing Bayesian pseudocoreset methods. Through FBPC, we successfully scaled our approach to handle larger models and datasets, demonstrating its potential for bridging the performance gap further. We agree that future research should continue in this direction, and we believe that our method serves as a valuable starting point for such endeavors.

---

**[Q1]** Yes, that's correct. When we increased the coreset size to 100, we obtained an accuracy of 73.45, and when we increased it to 200, the accuracy reached 74.80. It is evident that the performance increases as the size is expanded.

**[Q2]** As the coreset size (ipc) and the network size increase, memory usage also increases, and the memory burden may make conducting experiments in our current environment impractical. Specifically, BPC-fKL requires Monte Carlo samples of the size of the parameter dimension, whereas FBPC only needs samples of the size of the function-space dimension, resulting in a significant difference in memory usage. However, during the rebuttal period, we discovered that by reducing the pseudocoreset batch size during training, it becomes feasible to run the experiments on a 3090 GPU. With a batch size of 100 during training, BPC-fKL achieved an accuracy of 22.18 $\pm$ 0.32 and an NLL of 4.65 $\pm$ 0.02, whereas our FBPC achieved an accuracy of 26.43 $\pm$ 0.31 and an NLL of 4.30 $\pm$ 0.05. For a scalability study of our method, please refer to our general response [G1].

**Typos/minor:** Thank you for finding them; we will correct them.
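For readers wondering how the trajectory-statistics construction discussed in [W2] differs from SWAG, the basic idea is to collect statistics of the network's *function values* over late SGD steps rather than of its weights. A purely illustrative toy with a linear model in numpy (not the authors' code; the trajectory here is faked as noise around a MAP-like solution):

```python
import numpy as np

def function_space_trajectory_stats(thetas, X):
    """Empirical mean/variance of f(X; theta) = X @ theta over an SGD trajectory."""
    fs = np.stack([X @ th for th in thetas])  # (n_steps, n_points)
    return fs.mean(axis=0), fs.var(axis=0)    # Gaussian statistics in function space

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                   # evaluation inputs
theta_star = np.array([1.0, -2.0, 0.5])       # stand-in for the MAP solution

# Fake "late trajectory": parameters hovering around the MAP-like solution.
thetas = [theta_star + 0.05 * rng.normal(size=3) for _ in range(30)]
mu_hat, var_hat = function_space_trajectory_stats(thetas, X)
```

Note the memory contrast the rebuttal emphasizes: the statistics have the dimension of the function evaluations (here 5), not of the parameters, which is what keeps FBPC cheap for large networks.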
---

**References**

[1] B. Kim, J. Choi, S. Lee, Y. Lee, J.-W. Ha, and J. Lee. "On Divergence Measures for Bayesian Pseudocoresets." NeurIPS, 2022.
[2] T. G. J. Rudner, G. Chen, and Y. Gal. "Rethinking Function Space Variational Inference in Bayesian Neural Networks." 3rd Symposium on Advances in Approximate Bayesian Inference, 2020.
[3] T. G. J. Rudner, Z. Chen, Y. W. Teh, and Y. Gal. "Tractable Function-Space Variational Inference in Bayesian Neural Networks." NeurIPS, 2022.
[4] W. J. Maddox, T. Garipov, P. Izmailov, D. Vetrov, and A. G. Wilson. "A Simple Baseline for Bayesian Uncertainty in Deep Learning." NeurIPS, 2019.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed and constructive response. I had a closer look through some of the related works provided by Rd2wU, and the results on CIFAR10 at least seem to be in line with the literature (it might be worth citing a couple of these numbers), so I would say that performance is not strictly a barrier to acceptance and I will increase my score. However, I do still find the work for the most part very incremental, so I will only go up by 1 point.
Rebuttal 1: Rebuttal: We express our gratitude to all the reviewers for their valuable and insightful feedback. They have acknowledged the clarity of our method description (R-sWjp, R-9hB5) as well as our coverage of relevant prior works (R-9hB5). The reviewers have also recognized the paper's strong motivation and the significance of addressing an engaging problem (R-sWjp, R-9hB5, R-d2wU, R-jYGC). Moreover, they have emphasized that our proposed FBPC method demonstrates superior performance compared to other BPC approaches (R-sWjp, R-d2wU, R-jYGC), highlighting its practicality in Bayesian neural network inference (R-9hB5, R-jYGC). The reviewers have also highlighted our contributions, specifically our achievements in the multi-architecture setting (R-d2wU) and the empirical estimation of the function-space posterior (R-jYGC). We sincerely appreciate all the constructive comments, and we provide general responses addressing questions that were commonly raised by the reviewers below:

**[G1] (Scalability study)** In our paper, we proposed a more scalable BPC construction algorithm by focusing on distribution matching in the function space rather than the weight space. Therefore, we can effectively and efficiently create BPCs even as the number of parameters increases. To compare how scalable our approach is relative to posterior matching in the weight space, we measured GPU memory usage as a function of the number of parameters; the results are as follows. As shown in the table, the memory usage for weight-space BPC increases significantly as the number of parameters grows, while FBPC operates very efficiently. Additionally, memory usage grows proportionally with the coreset size (ipc). We believe the same principle applies to the dataset size as well.
**<GPU memory usage (GB) (CIFAR10)>**

| ipc10 | LeNet | ConvNet | ResNet18 |
|--------------|-----------------|-----------------|-----------------|
| # Parameters | $6.2 \times 10^4$ | $3.2 \times 10^5$ | $1.1 \times 10^7$ |
| FBPC | 0.02 | 0.32 | 2.56 |
| BPC-fKL | 0.11 | 3.17 | 12.18 |

| ipc | 1 | 10 | 50 |
|---------|------|------|-------|
| FBPC | 0.04 | 0.32 | 1.59 |
| BPC-fKL | 0.41 | 3.17 | 15.59 |

**[G2] (More experiments)** Our paper introduces a scalable BPC construction method for BNN classification. We extended our exploration to regression, using the yacht dataset with (308, 6) dimensions and an MLP with one hidden layer. The test mean squared error (MSE) for different coreset sizes is as follows. This confirms our method's effectiveness in regression. The 'Random' method selected samples randomly from real datasets, potentially including outliers. As the coreset size increases, the trained FBPC's test MSE approaches 0.0647, matching the test MSE from training on the entire dataset.

| | 10 | 20 | 30 | 50 | 100 |
|--------|--------|--------|--------|--------|--------|
| Random | 0.3424 | 0.5335 | 0.2757 | 0.1230 | 0.0931 |
| FBPC | 0.3148 | 0.2541 | 0.1434 | 0.0854 | 0.0829 |
| FBPC (Jacobian) | 0.3156 | 0.2460 | 0.1413 | 0.0872 | 0.0825 |

**[G3] (A lot of approximations)** To understand the impact of these approximations, we conducted regression task experiments on small BNNs, where it was possible to obtain the full Jacobian. As seen in the table from [G2], the results were not significantly different from ours. Analyzing the effects of each approximation in larger models and researching more refined and scalable approximation methods would be a valuable direction for future research. **[G4] (Possibility of the usage of the full Jacobian)** Incorporating the full Jacobian in computations is challenging, especially for our pseudocoreset updates that involve frequent calculations of equations 12 and 13.
Computing $\Sigma_x$ and $\Sigma_u$ requires inverting the Hessian matrix, often approximated using the Fisher information matrix. For the full dataset, this becomes computationally demanding, and for the coreset, the small size leads to high variance with the standard Fisher approximation. These challenges persist even for small BNNs. Initially, we attempted several approximations for FBPC, especially diagonal approximations of $\Sigma_x$ and $\Sigma_u$, which resulted in instability and hindered convergence. Employing the empirical variance proved more stable for FBPC computation. For instance, using the Jacobian and Fisher information matrix for a diagonal covariance in the CIFAR-10 ipc10 setting resulted in 56.97% accuracy for FBPC, which was less favorable than our method. This approach also introduces additional hyperparameters, complicating the identification of optimal settings. **[G5] (Concerns of Gaussian approximation)** As highlighted by Reviewer 9hB5, there is no assurance that the actual posterior distribution would perfectly align with the variational Gaussian distribution used in the function-space approximation. Nonetheless, it is crucial to recognize that approximations are required for effective objective computation. Employing linearization and variational Gaussian distributions in function space is a practical strategy to manage scalability and enable feasible computations [1]. Although this approximation introduces some bias, it is a necessary compromise to address the challenges of high-dimensional models. In the context of our work, exploring more sophisticated techniques, such as normalizing flows [2,3], to design more complex posterior distributions is indeed a promising direction for future research. --- **References** [1] Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh and Yarin Gal. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” NeurIPS 2022. [2] Kristiadi et al.
“Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks.” NeurIPS 2022. [3] Chen et al. “Bayesian inference via sparse Hamiltonian flows.” NeurIPS 2022.
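The diagonal-Gaussian function-space matching discussed in [G4]/[G5] can be illustrated generically. The sketch below is not the paper's FBPC objective; it is just the standard KL divergence between two diagonal Gaussians whose moments are estimated empirically from sampled network outputs. All names and the synthetic samples are illustrative assumptions.

```python
import numpy as np

def gaussian_kl_diag(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Hypothetical function-space samples: network outputs f(x; theta_s) at a batch
# of inputs, drawn under the coreset posterior and the full-data posterior.
rng = np.random.default_rng(0)
f_coreset = rng.normal(0.1, 1.1, size=(200, 10))  # 200 posterior samples x 10 logits
f_full = rng.normal(0.0, 1.0, size=(200, 10))

# Empirical (diagonal) moments, with a small jitter for numerical stability.
kl = gaussian_kl_diag(f_coreset.mean(0), f_coreset.var(0) + 1e-6,
                      f_full.mean(0), f_full.var(0) + 1e-6)
print(kl)  # non-negative scalar; 0 only when the two Gaussians coincide
```

Using the empirical variance of sampled outputs, rather than a Fisher- or Jacobian-based covariance, mirrors the stability argument made in [G4].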
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Greatness in Simplicity: Unified Self-Cycle Consistency for Parser-Free Virtual Try-On
Accept (poster)
Summary: This work proposes a virtual try-on method. At inference time, no parser information (e.g., pose, segmentation map) is needed; given a person's image and a garment, the method is able to output a try-on image. To learn this model with a dataset of paired person and garment images, a deformation network is learned to warp the garment to match the person. Meanwhile, a generator is learned to combine the warped new garment and the person to create the try-on image. The generator is trained with a self-cycle-consistency objective, and the intuition is simple: a try-on image should be within the distribution of real images (adversarial loss), and trying the person's original clothes on an already tried-on image should look like the original image (cycle-consistency loss). The method has multiple other components, making it hard to cover them all in this summary. The proposed method is compared with prior work and shows performance similar to the state of the art. Strengths: 1. From the qualitative comparisons, using the ground-truth deformation field as supervision instead of other alternatives (appearance flow, TPS) seems to help reduce artifacts, especially in cases where the clothes have stripes. 2. The work proposes a good idea to use an auxiliary deformer that takes in only pose information (without RGB) of the person as input to predict the garment deformation. It makes sense that it will help generalize prediction when the garment is not paired with the person (the training set is all paired up). Since at inference time only RGB inputs are given, this auxiliary deformer is used to generate pseudo ground truths for unpaired garment and person images. In short, the auxiliary deformer "teaches" the final deformer to generalize better, which is a good idea. Weaknesses: 1. In the related work section, it is mentioned that using parser information can introduce unreliable errors and artifacts.
And even with teacher-student training, this issue can still cause robustness problems. Meanwhile, I do not understand why the proposed method alleviates the issue. The auxiliary deformation network, which is essential to the method (see Fig. 5), is trained with parser information (dense pose). It will be great to clarify this. 2. The writing is hard to understand. It prevents readers from understanding what exactly the algorithm is. See the points below: 3. In line 189, the Lsec loss is referred to a citation without mentioning what exactly it is. Is it precisely the regularization term in Eq. 2? 4. The skin region is mentioned in the method section (s_p, p_s). How do you locate, define, or detect these regions? The same question holds for the content preservation loss: how do you locate the invariant content region (head, trouser, etc.)? 5. Is there any reason that the cycle-consistency loss does not need to be separated into skin, content, and garment reconstruction losses? It seems compatible, and I don't understand why the loss function design cannot be applied symmetrically. 6. In Figure 5, I do not understand the visualization in the left column of SIG w/o MRF. Having an explanation of the visualization would be nice. Also, although I understand the other visualizations represent the deformation field, it is still good to mention it in the caption. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. The paper mentioned that prior work uses two CNNs to train models cyclically, while the proposed method uses one CNN to train a self-cycle. However, is there a reason why training one model directly with self-cycle consistency will not work? Also, I am guessing that the skin, content, and garment reconstruction losses are among the important components that make training converge to a reasonable solution. It would be great to confirm this. 2. In my opinion, the draft needs a major update, especially in the method section.
If the revision explains the design choices more clearly, I am happy to change my ratings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: 1. The paper mentions that it is challenging to achieve good convergence during end-to-end training. Does this mean that two-stage training (pretraining NGD then training SIG) is required for good results? Or does this mean that in general the current method does not attain good convergence? 2. In line 290, what is meant by a closed-form solution here? ----------------- Raised score to a borderline accept. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Thank you for your diligent efforts and valuable suggestions. --- ### Weaknesses: - **W-A1:** Thanks for your comments. Datasets consist of paired garment-person images $(g,p)$, where $p$ is wearing $g$. Current methods use the person representation $I_p$ and $g$ to find dense spatial correspondences between $g$ and $p$. However, $I_p$ lacks $p$'s depth and spatial info, leading to rigid deformation and fitting. We attempted to directly train the main deformer $G_\theta$ using $(g,p)$, because the features extracted from $p$ contain rich spatial and depth info, which the flat $I_p$ does not possess. However, $p$ also introduces the same shape, color, and texture correlation features as those of $g$ (strong correlation), which hinders the generalizability of $G_\theta$. - Therefore, we train $G_A$ and $G_\theta$ using $(g,p)$, allowing the rich spatial and depth knowledge in $G_\theta$ to guide the convergence of $G_A$. Note that the strong correlation in $G_\theta$ is not learned by $G_A$, as $G_A$'s input ($I_p$, dense pose) does not contain the clothing's shape, color, and texture info. This alleviates the impact of unreliable prior knowledge on $G_A$ stemming from an erroneous $I_p$. Then, we retrain $G_\theta$ using $G_A$ to eliminate ineffective strong correlations in the latent feature space of $G_\theta$. In this way, we achieve an efficient $G_\theta$. --- - **W-A2:** We apologize for our writing issues. We previously lacked detailed explanations. Now, we have conducted extensive experiments and provided additional explanations to improve readability. --- - **W-A3:** We apologize for any confusion caused. Yes, on lines 188 and 189, we clarify that $R_{sm}$ is implemented by $L_{sec}$, which indeed corresponds to the regularization term in Eq. (2). --- - **W-A4:** Thanks for your comments. Datasets contain the garment $g$ and its mask $p_M$, and the person $p$ and its parsing map $p_p$.
$p_p$ includes several layers; the neck and arm layers are merged to form the skin layer, denoted as $p^n_p+p^a_p=p^s_p$, and the skin and garment layers are merged to form the agnostic layer, denoted as $p^s_p+p^c_p=p^u_p$. Then, $p_M$, which deforms along with $g$, is used as the new target clothing layer $\hat p_M$, and subtracting $\hat p_M$ from $p_p^u$ yields the new target skin layer $\hat p_p^s$ corresponding to $g$, i.e., $\hat p_p^s=p_p^u-\hat p_M$. $s_p$ is the fixed skin region before and after trying on, which can be obtained by intersecting $\hat p_p^s$ and $p_p^s$, denoted as $s_p=(p_p^s \cap \hat p_p^s)\times p$. $\hat p'_s=\hat p_p^s \times \hat p$, where $\hat p$ is the result generated in the first stage. In this manner, the invariant content regions (head, trousers, etc.) are also located by $p^h_p$ and $p^t_p$ in $p_p$. - Note that during the training and inference stages, we do not require a parser to estimate new human parsing as input. This perfectly aligns with the requirements of parser-free virtual try-on. --- - **W-A5:** Thanks for your comments. As shown in Fig. 1 (https://github.com/anony-conf/results-USC-PFN), we initially define a cycle-consistency loss $L_{scyc}$, which is employed to supervise the reconstruction of the GT $p$. This global loss is used to preserve the structural and distributional consistency between the reconstructed $p'$ and $p$, ensuring a harmonious coordination of the generated skin, garment, and overall appearance. However, it cannot specifically focus on the generation of localized body regions, particularly in the first stage. Since there is no GT available to calculate $L_{scyc}$, we introduce only the skin, content, and garment reconstruction losses in the first stage to optimize those regions that can be supervised. Furthermore, due to the weight sharing between the two stages, designing symmetrical loss functions becomes unnecessary.
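The layer algebra in W-A4 above (merging parsing layers, subtracting the new garment mask, and intersecting the old and new skin layers) can be sketched with binary masks. This is a toy illustration with made-up 1-D masks, not the authors' implementation.

```python
import numpy as np

# Toy 1-D binary parsing layers (1 = pixel belongs to the layer).
neck = np.array([1, 0, 0, 0, 0, 0])    # p^n_p
arms = np.array([0, 1, 1, 0, 0, 0])    # p^a_p
cloth = np.array([0, 0, 0, 1, 1, 0])   # p^c_p, original garment layer

skin = neck | arms                     # skin layer p^s_p = p^n_p + p^a_p
agnostic = skin | cloth                # agnostic layer p^u_p = p^s_p + p^c_p

# Mask of the new garment after deformation (hypothetical \hat p_M).
new_cloth = np.array([0, 0, 1, 1, 1, 0])
new_skin = agnostic & (1 - new_cloth)  # \hat p^s_p = p^u_p - \hat p_M

# Skin that stays skin before and after try-on: p^s_p ∩ \hat p^s_p.
fixed_skin = skin & new_skin

person = np.array([9, 8, 7, 6, 5, 4])  # toy "person image"
s_p = fixed_skin * person              # fixed skin region of p
print(s_p.tolist())  # -> [9, 8, 0, 0, 0, 0]
```

Note how every region is derived from masks already present in the dataset, which is why no parser is needed at inference time.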
--- - **W-A6:** Yes, the left column of SIG w/o MRF is the deformation field $f$, indicating the effects after removing the $f$ generated by NGD during the try-on synthesis stage of SIG. --- ### Questions: - **Q-A1:** Thanks for your comments. As shown in Fig. 1 (https://github.com/anony-conf/results-USC-PFN), in the first stage, the try-on result $p'$ does not have a corresponding GT in the dataset. If we do not supervise this stage and directly use $p'$ as input for the second stage, it can be considered a direct mapping from input $p$ to output $p$. However, our task requires achieving the cyclic mapping $p$→$p'$ and $p'$→$p$. Therefore, directly applying self-cyclic consistency training in the second stage will not work. - These reconstruction losses are crucial components: 1) there is a need to enforce a mapping of the outputs from the first stage into the domain of $p'$, requiring the formulation of local losses for the first stage; 2) the second reason is the same as in **W-A5**. --- - **Q-A2:** We sincerely appreciate your professional and responsible comments and suggestions. Based on the feedback from all the reviewers, we have made major revisions to our manuscript, including the addition of the required experiments. Therefore, we earnestly ask that you reconsider the rating of our paper. --- ## Limitations: - **L-A1:** Thanks for your comments. To demonstrate the necessity of two-stage training, we conducted end-to-end training experiments, which show that this approach cannot achieve the same favorable convergence as the two-stage one: both NGD and SIG failed to converge, resulting in an extremely low SSIM score of 0.37 (see Fig. 8.5 (a)). We also computed the FID and KID values on the test set during training to reflect the convergence of our approach (see Fig. 8.5 (b)). This successfully validates all the points in **Q-A1**. --- - **L-A2:** The term "closed-form solution" in line 290 refers to our proposed self-cycle consistency pipeline.
--- > We sincerely thank you for your dedicated efforts. We hope that our responses have provided you with new insights into our work. We look forward to you giving our paper a chance by increasing your rating. --- Rebuttal Comment 1.1: Title: More clarifications Comment: Thanks so much for the clarifications. Having these in the revised draft will strengthen the paper a lot. To confirm, are all the color-coded visualizations in Fig. 5 deformation fields? Is the deformation prediction for SIG w/o MRF simply not performing well? Thanks! --- Reply to Comment 1.1.1: Comment: > Thank you very much for your professional and careful feedback. Yes, all the color-coded visualizations in Figure 5 represent deformation fields; in other words, they are visual results of the deformation fields. > Regarding SIG w/o MRF, its visual results in Figure 5 are the worst, as the given clothing undergoes only slight deformation, resulting in very smooth stripes and severe misalignment of logos on the clothes. > Thank you once again for investing significant effort in reviewing our work. Your insightful and professional feedback has greatly improved the quality of our paper.
Summary: This paper proposes a new parser-free virtual try-on network (USC-PFN) that uses only unpaired images as input to generate realistic try-on results. To address the core warping problem in virtual try-on, it models the deformation field estimation using a Markov Random Field. To train the try-on generator using unpaired data, it proposes a self-cycle consistency pipeline. Extensive comparisons with state-of-the-art methods on the VITON benchmark demonstrate its superiority, and the ablation study also shows the effectiveness of the different modules in the proposed method. Strengths: - This paper explores parser-free virtual try-on, which is challenging and of great significance for image-based virtual try-on. - For the first time, this paper introduces the Markov Random Field into the non-rigid garment deformer, which is quite different from the deformation modules in previous methods. - It proposes a novel self-cycle consistency pipeline for training the try-on generator using unpaired images. - The authors conduct extensive comparisons with existing state-of-the-art methods to illustrate the superiority of the proposed method. The ablation study also shows the effectiveness of the different modules in the proposed method. Weaknesses: - One core technical contribution of this paper is that it models garment warping using a Markov Random Field. However, the authors do not provide a clear explanation of it. What is the difference between it and the widely used appearance flow? Why can it outperform the TPS-based or appearance flow-based methods? - The authors only conduct experiments on the VITON benchmark, in which the image resolution is quite low (256 x 192). However, most of the advanced methods focus on higher-resolution virtual try-on (e.g., 512 x 384, 1024 x 768), which is closer to the real-world try-on scenario. - The writing is not straightforward and several descriptions are a bit obscure.
For example, (1) Although the garment deformer and the auxiliary deformer receive different inputs, in Figure 3 they seem to take the same inputs (i.e., the person image and the garment image). (2) In line 131, it is unclear how to obtain the deformation field $\tilde{f}$. - Some descriptions in the main paper are a bit overclaimed. In line 300, the authors claim that USC-PFN can rely solely on the garment and human image for training. However, it still requires some human conditions, like human parsing and densepose, during the training of NGD and SIG. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - For the non-rigid deformation module, is the pre-trained deformer necessary for the training of the final deformer? In my opinion, it is unnecessary to use supervision provided by the pre-trained deformer (which is trained using paired images) for the training of the auxiliary deformer, since the ground truth deformation field is not necessary for the training of the deformer (validated in previous works like PF-AFN[1], FS-VTON[2]). Once the auxiliary deformer is obtained, we could train the final deformer using unpaired images. - When training the Self-cyclic Image Generator (SIG), several local region supervisions (i.e., skin loss, garment loss, preserved content loss) are employed to facilitate the self-supervised cyclic training, which must resort to human parsing to obtain the specific local regions. Thus, parsing errors might still affect the training procedure. However, some knowledge distillation methods like PF-AFN[1] do not face the parsing error issue, since they only use global supervision when training the parser-free student network. My question is how USC-PFN alleviates the influence of parsing errors during training. - Since the SIG can be trained with unpaired images, is it possible for USC-PFN to leverage a large amount of unpaired images from the Internet during the training of SIG?
Would such an increase in training data improve the performance of USC-PFN? [1] Ge et al. "Parser-Free Virtual Try-on via Distilling Appearance Flows", CVPR 2021. [2] He et al. "Style-Based Global Appearance Flow for Virtual Try-On", CVPR 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Some limitations of this paper have been discussed in the main paper. Another limitation I am concerned about is the image resolution, since it is quite important for real-world try-on scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Thank you for your diligent efforts and valuable suggestions. --- ### Weaknesses: - **A1:** Thanks for your comments. Datasets consist of paired garment-person images $(g,p)$, where $p$ is wearing $g$. Existing methods use the person representation $I_p$ and $g$ to find dense spatial correspondences between $g$ and $p$. They adjust the garment shape to minimize shape differences between the deformed garment $\hat g$ and the Ground Truth (GT, the garment on $p$). Due to the lack of a GT flow field, appearance flow (AF)-based methods indirectly predict the flow field $F$ under the supervision of the warped garment's shape ($\hat g$) [19], i.e., they calculate the loss between $\hat g$ and the GT. Two issues arise: 1) $I_p$ lacks $p$'s depth and spatial info, leading to rigid garment deformation and unrealistic fitting. 2) Optimizing $F$ indirectly by optimizing $\hat g$'s shape with pixel similarity disregards structural and depth correlations between $\hat g$ and $p$, causing excessive deformation. Similarly, TPS-based methods predict control points to calculate the same flow field, so the drawbacks of both methods are the same, and TPS is even more rigid. - Our method tackles these problems. 1) We use $p$ instead of $I_p$ as input to provide depth and spatial info. But $p$ and $g$ are paired; to solve this issue, 2) we introduce a Markov Random Field (Eq. 1) for clothing deformation, which supervises the estimated deformation field $f$ with the GT field $\bar f$. However, $\bar f$ does not exist in the dataset. Therefore, we introduce the auxiliary deformer $G_A$ to learn depth and spatial info from $p$. $G_A$, pretrained with the densepose descriptor $p_d$, is used to remove irrelevant priors (color, texture, and shape features of $p$'s garment) from the extracted features of $p$, ensuring only depth and spatial info remains. Thus, the GT field $\bar f$ can be generated by $G_A$.
- In summary, our approach directly eliminates garment-correlated info from the features of $p$, which contain rich spatial and depth info, and employs MRF principles to directly supervise $f$, enhancing the model's spatial awareness. We provide both qualitative and quantitative results on low-resolution and high-resolution datasets in **A2** to demonstrate the effectiveness of our deformer. --- - **A2:** Thanks for your suggestion. To demonstrate the effectiveness of our method, we have added qualitative and quantitative results of both NGD and SIG on the VITON-HD dataset at https://github.com/anony-conf/results-USC-PFN , Sec. 1 to 5. --- - **A3:** (1) We sincerely apologize for the ambiguity in Fig. 3. There are indeed differences in their inputs, as explained in Section 3.2 and **A1**. The garment deformer takes the person $p$ and garment $g$ as inputs, while the auxiliary deformer takes the densepose descriptor $p_d$ of $p$ and $g$ as inputs. We have revised Fig. 3 to enhance its clarity. (2) The deformation field $f$ is directly generated by $G_\theta$, i.e., $G_\theta(g,p)=f$, and $\hat g$ is obtained by bilinear sampling of $g$ using $f$. --- - **A4:** We deeply apologize for our inaccurate description. We intended to convey that USC-PFN can indeed be trained and used for inference solely based on garment and person images **as input**. --- ### Questions: - **A1:** Thanks for your professional suggestion. Our auxiliary deformer $G_A$ is necessary because we attempted to directly train the main deformer $G_\theta$ using paired ($g$, $p$), but the features extracted from $p$ contain abundant shape, color, and texture correlations with $g$, which hinders the generalizability of $G_\theta$. Consequently, we had to separately train $G_A$ to eliminate latent shape, color, and texture correlations from $p$'s features, enabling feasible training with paired images. The specific process is detailed in **A1** above. --- - **A2:** Thanks for your insightful comments.
Firstly, we employ a global self-cycle consistency loss $L_{scyc}$, which does not introduce parsing errors. For the skin loss, we utilize a pre-trained SR trained using a perceptual loss to generate skin regions; it produces clean skin outputs even when there are cloth-related impurities in the input. This fully accords with the consistency of the skin distribution under the perceptual loss. The garment loss is only used in the first phase. We segment the garment area of $\hat g$ and $p$ using the mask of $\hat g$. As our network's input is also the segmented $\hat g$, this loss ensures direct $\hat g$ output by SIG, so it can only introduce white boundaries caused by parsing errors. We address this via $L_{scyc}$ in the second phase, which enforces supervision via a GT without such boundaries. The preserved content loss is used to preserve the fixed region; regions that are incorrectly segmented are penalized by the skin loss and garment loss. Even if the preserved content is incomplete, SIG's extensive training with massive data mitigates the impact. - If possible, please refer to Sec. 8.4 (link above) for a more comprehensive explanation. --- - **A3:** Thanks for your constructive comments. During the training of SIG, USC-PFN can leverage a large amount of unpaired images from the Internet. This improves the performance of USC-PFN. We augmented the VITON-HD dataset with additional VITON data; see the table below. The experimental results validate your point.

|Methods|SSIM ↑|FID ↓|KID ↓|
|:--:|:--:|:--:|:--:|
|Ours (VITON-HD)|0.901|9.08|0.142|
|Ours (VITON-HD + VITON)|0.906|9.01|0.131|

--- ### Limitations: - **A1:** Thanks for your suggestion. We have added experiments on the high-resolution VITON-HD dataset; specific qualitative and quantitative results can be found at https://github.com/anony-conf/results-USC-PFN , Sec. 1 to 5. --- > We sincerely thank you for your dedicated efforts. We hope that our responses have provided you with new insights into our work.
We look forward to you giving our paper a chance by increasing your rating. If you have more questions or encounter broken links, please comment, and we'll assist you quickly. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer RUsU Comment: Thanks for the detailed response. However, I still feel confused about some technical details. First, what is the essential difference between the proposed Markov Random Field and the widely used appearance flow? After reading the paper and the authors' rebuttal, I argue the Markov Random Field is quite similar to the appearance flow except for its network inputs (i.e., receiving the image rather than the person representation), training strategy (i.e., using unpaired images as training data, introducing an auxiliary deformer), and the loss functions. Second, in the authors' rebuttal, the authors argue the person representation used in previous appearance flow-based deformation methods lacks depth and spatial information. However, methods like PF-AFN and FS-VTON take densepose as the deformation network's input, which I argue can also provide depth and spatial information. Third, the authors seem to have misunderstood my concern in Question 1. I agree with the authors that using an auxiliary deformer is necessary to provide pseudo GT. My question is why should we use the paired images to pre-train the deformer? (as mentioned in lines 163-164). In my opinion, given the pseudo GT deformation field, the deformer could be directly trained using unpaired images. Fourth, what is the specific implementation of $D$ in Equation 1? --- Reply to Comment 1.1.1: Comment: > **A1:** Thanks for your comments.
Markov Random Field (MRF) is widely employed in image registration; we extend this concept to the treatment of deformation fields ($f$), where each element (data term) of $f$ is associated with a unary clique and a bi-clique, representing the state of the element itself and its relationships with its neighboring elements, respectively. Ideally, accurately estimating the best state for each element ensures optimal alignment, which can resolve occlusions. Effectively managing the relationships between elements and their neighborhoods can address excessive deformations. > For the unary clique, we optimize each data term by minimizing the sum of absolute differences (SAD) between the estimated $f$ and the ground truth (GT) $\bar f$. However, due to the absence of the GT $\bar f$ in datasets, it is not feasible to directly compute the SAD. Therefore, we employ an auxiliary $\mathcal G_A$ to provide us with a pseudo-GT $\bar f$. $\mathcal G_A$ takes $C$ and the densepose $I_d$ (without UV map) of the person $I$ wearing $C$ as inputs. However, $I_d$ is the semantic map of $I$, which lacks ***depth information and 3D spatial information of $I$. Specifically, perspective changes, texture variations, shadows and lighting, and depth of field present in $I$ are absent in $I_d$.*** > **A2:** Similarly, although PF-AFN and FS-VTON also use the densepose $I_d$ (without UV) as input, $I_d$ lacks aspects present in person images such as perspective changes, texture variations, shadows and lighting, and depth of field. Therefore, PF-AFN, FS-VTON, and our $\mathcal G_A$ all lack the 3D spatial and depth information of $I$. So, the warped results obtained by their teacher models mainly focus on deforming $C$ to align with the clothing and arm semantic layers of $I_d$, without considering whether the warped result corresponds naturally and reasonably to the spatial structure of the human body.
This can be observed in our qualitative results, where these methods show some cases in which the clothing fabric is excessively stretched from the abdomen to the arm area, which also leads to occlusions. > To address this, we initially co-trained our main deformer $\mathcal G_\theta$ and $\mathcal G_A$. Here, $\mathcal G_A$ takes $C$ and the densepose $I_d$ as inputs, while $\mathcal G_\theta$ takes the paired $(C, I)$ as inputs to extract the depth and 3D spatial information of $I$. We use the outputs of $\mathcal G_\theta$ to supervise $\mathcal G_A$, denoted as: $\mathcal G_\theta(C,I)=f$, $\mathcal G_A(C,I_d)=\bar f$, calculating $loss[f, \bar f]$ for $\mathcal G_A$, **to supplement the lack of depth and 3D spatial information in $\mathcal G_A$.** > **A3:** However, the pseudo-GT $\bar f$ is never as accurate as a real GT, which can limit the improvement of clothing deformation, and the usage of the flawed $I_d$ as input can impact the deformation outcomes. As input, $I$ does not have these issues. However, there is a strong correlation between paired $C$ and $I$ because their clothing shares the same colors, textures, and shapes. This means that a deformer trained using $(C, I)$ exhibits high coupling; in other words, it becomes ineffective when faced with unpaired data. Additionally, utilizing unpaired data as input can address the above issue. However, unpaired data lacks local awareness between the clothing and the human body; in other words, the deformer doesn't know which part of the clothing corresponds to the neckline and which part corresponds to the sleeves. Paired data, on the other hand, can ensure this alignment based on color and texture consistency. As a result, our approach achieves good deformation results even on complex curved arms, and the alignment of necklines is also relatively accurate. > To combine these advantages, we incorporate both paired and unpaired data in the architecture of the entire cycle training.
Specifically, the training of the deformer is divided into two stages: the first stage employs paired data to retain the ability of $\mathcal G_\theta$ to perceive depth and 3D spatial information, while the second stage utilizes unpaired data as input (this stage is similar to PF-AFN and FS-VTON, with the difference that we use cycle consistency to continuously adjust the pseudo-GT $f$, thereby refining deformations) to overcome the limitations imposed by the pseudo-GT $\bar f$. > In the first stage, after training $\mathcal G_A$, we retrain $\mathcal G_\theta$ and allow $\mathcal G_A$ to supervise $\mathcal G_\theta$. This step aims to have $\mathcal G_A$ penalize the strong correlations in the latent feature space of $\mathcal G_\theta$, i.e., calculating $loss[f, \bar f]$ for $\mathcal G_\theta$. > For the bi-clique, we apply improved regularization to constrain $f$. > **A4:** $D$ is implemented as the L1 norm plus a perceptual loss for the second stage described in A3 above. > We apologize once again for any confusion. If you have any further concerns, please reach out to us. Thank you!
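The MRF formulation described above (a unary SAD term against a pseudo-GT field and a bi-clique term over neighboring elements) can be sketched as follows. This is an illustrative reconstruction from the rebuttal text, not the authors' code: the deformation field is simplified to one scalar per grid element, and `lam` is a hypothetical balancing weight.

```python
# Illustrative MRF energy for a deformation field, per the rebuttal:
# unary clique -> SAD against a pseudo-GT field (alignment / occlusions)
# bi-clique    -> smoothness between 4-neighbors (excessive deformation)

def sad_unary(f, f_bar):
    """Sum of absolute differences between estimated and pseudo-GT fields."""
    return sum(abs(f[i][j] - f_bar[i][j])
               for i in range(len(f)) for j in range(len(f[0])))

def biclique_smoothness(f):
    """Penalize large jumps between horizontally/vertically adjacent elements."""
    h, w = len(f), len(f[0])
    cost = 0.0
    for i in range(h):
        for j in range(w):
            if i + 1 < h:
                cost += abs(f[i][j] - f[i + 1][j])
            if j + 1 < w:
                cost += abs(f[i][j] - f[i][j + 1])
    return cost

def mrf_energy(f, f_bar, lam=1.0):
    """Total energy: unary data term plus weighted pairwise regularizer."""
    return sad_unary(f, f_bar) + lam * biclique_smoothness(f)
```

Minimizing the unary term pulls each element toward the pseudo-GT (the role of $\mathcal G_A$'s supervision), while the bi-clique term plays the role of the regularization that suppresses excessive stretching between neighbors.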
Summary: The paper addresses the challenges in generating high-quality virtual try-on images, specifically focusing on non-rigid garment deformation and unpaired garment-person images. Existing methods rely on disentangling garment domains with the aid of "teacher knowledge" or dual generators, which can limit the quality of try-on results. Additionally, current garment deformation techniques struggle to mimic natural interaction between garments and the human body, resulting in unrealistic alignment effects. To overcome these limitations, the authors propose a Unified Self-Cycle Consistency for Parser-Free virtual try-on Network (USC-PFN). USC-PFN utilizes a single generator and incorporates a Markov Random Field for more realistic garment deformation. Strengths: - The paper is easy to understand. - The experimental results seem okay, but not excellent compared to the baselines. - The proposed method requires less computational cost than baselines such as ACGPN and DCTON during inference. Weaknesses: - The primary limitation of this paper lies in the lack of significant performance improvement over existing virtual try-on baselines. In particular, the performance comparison in Table 1 raises doubts about whether there is a meaningful enhancement in terms of FID, which is even higher (worse) than that of PF-AFN. Furthermore, the disparity in frames per second (FPS) between PF-AFN and the proposed approach is not significant. - The proposed self-cyclic image generator is inspired by StarGAN; however, it is unclear what specific advantages it offers. It appears to train a generator with clothing as a condition instead of employing other models that utilize a cycle consistency loss. Nevertheless, it is uncertain whether this approach is novel and impactful enough to be presented at a top-tier conference.
- The newly proposed non-rigid garment deformer based on Markov Random Field lacks sufficient examples to demonstrate its superior warping capabilities compared to existing warping methods. Various virtual try-on methods have also proposed different approaches to enhance the performance of warping. However, it remains unclear how the proposed method establishes its superiority in this regard qualitatively and quantitatively. - Most recent virtual try-on models have demonstrated their performance on high-resolution datasets such as VITON-HD [1] or Dresscode [2]. Additionally, parser-free virtual try-on models like DC-VTON have also shown their performance on images of around 512x384 resolution. However, this paper is lacking references to related works and only presents evidence of its performance on the low-resolution VITON dataset, which is somewhat disappointing. [1] VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization, CVPR 2021 [2] Dress Code: High-Resolution Multi-Category Virtual Try-On, ECCV 2022 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Does the paper include a comparative analysis between the proposed warping method and other existing warping methods? I contend that it is imperative to conduct a direct comparison, encompassing both qualitative and quantitative evaluations, with well-established warping techniques such as TPS transformation and Appearance Flow. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Mentioned above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Thank you for your diligent efforts and valuable suggestions. --- ### Weaknesses: - **W-A1:** Thanks for your comment. Our method significantly outperforms both RT-VTON (CVPR 2022) and SDAFN (CVPR 2022), which are more advanced than PFAFN. Moreover, our method outperforms PFAFN in KID and SSIM. We compare the performance improvements of the three methods over PFAFN; see Table 1. |Table 1|SSIM↑|FID↓|KID↓|SSIM*|FID*|KID*| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |PFAFN|0.89|10.06|0.264|/|/|/| |RT-VTON|-|11.66|-|-|-14.2%|-| |SDAFN|0.88|12.05|-|-1.1%|-18.0%|-| |**Ours**|**0.91**|10.47|**0.249**|+2.2%|-4.1%|+5.7%| - In Table 1, our method shows the most significant improvement. In addition, we introduced a publicly available augmented VITON test set (see Table 2) and the high-definition VITON-HD dataset for a comprehensive and fair evaluation. |Table 2|SSIM ↑|FID ↓| |:--:|:--:|:--:| |ACGPN|0.81|20.75| |Cloth-flow|0.86|13.05| |PF-AFN|0.87|12.19| |**Ours**|**0.90**|**10.48**| - In Table 2, the SSIM of our method reaches 0.90, an increase of 3.4% compared to PFAFN. The FID is 10.48, a reduction of 14% compared to PFAFN. Next, we evaluated the baseline methods on VITON-HD (512$\times$384) (see Table 3). |Table 3|Pub.|Parsing as input|SSIM ↑|FID ↓|KID ↓| |:--:|:--:|:--:|:--:|:--:|:--:| |CP-VTON|ECCV 2018|Y|0.791|30.25|4.012| |ACGPN|CVPR 2020|Y|0.858|14.43|0.587| |DCTON|CVPR 2021|Y|0.810|15.55|/| |VITON-HD|CVPR 2021|Y|0.843|11.64|0.300| |HR-VITON|ECCV 2022|Y|0.878|9.90|0.188| |**Ours**|/|**N**|**0.899**|**9.10**|**0.159**| - In Table 3, our approach also surpasses the SOTA method HR-VITON, demonstrating significant performance improvements and highlighting the strength of our method. **The qualitative results on VITON-HD are in Fig.
4 (https://github.com/anony-conf/results-USC-PFN).** - Furthermore, regarding FPS, we mentioned in line 275 of the paper that the parameters, FLOPs, and FPS of our network can vary with the network used. We intentionally reduced the *ngf* parameter in NGD (see Table 4); both parameters and FLOPs are significantly reduced. |Table 4|#Params|FLOPs|FPS|FID-Clothing ↓|SSIM-TryOn ↑| |:--:|:--:|:--:|:--:|:--:|:--:| |ACGPN|139M|206G|10|42.10|0.84| |DCTON|153M|194G|19|42.80|0.83| |PF-AFN|99M|69G|34|22.81|0.89| |Ours (All ngf=64)|140M|46G|39|18.70|0.91| |Ours (NGD ngf=32)|**87M**|**29G**|**40**|**18.85**|**0.91**| - We also calculated the complexity on VITON-HD (see Table 5). |Table 5|Weight|#Params|FLOPs|FPS| |:--:|:--:|:--:|:--:|:--:| |VITON-HD|588M|154M|1689.7G|3.76| |HR-VITON|586M|148M|1555.4G|4.09| |**Ours**|**334M**|**87M**|**467.9G**|**22.19**| --- - **W-A2:** Our method offers a novel viewpoint for virtual try-on tasks. In comparison to previously proposed solutions like disentangled cycle consistency [A] and knowledge distillation [B], its advantages are summarized as follows: 1) [A] uses a dual network during training but only one network for inference, leading to convergence challenges and increased computational overhead. In contrast, our method achieves efficient convergence with a single shared-weight network, requiring fewer parameters and FLOPs. 2) [A] relies on human parsing as input during both training and inference, making the results sensitive to incorrect parsing. In [B], teacher network training and inference depend on the parser, where erroneous human parsing in teacher knowledge can mislead the student network. Our approach only takes garment and person images as input, thus avoiding the above issues. 3) [A] employs disentangled cycle consistency to segment clothing and skin, which may cause artifacts at boundaries after their synthesis. Our method directly uses person images as input, avoiding this issue.
4) [A] introduces a multi-encoder network for feature extraction, adding complexity. [B] employs a complex clothing deformer for clothing deformation. Our approach can utilize any generator as our training network, achieving results surpassing [A] and [B]. 5) [A] uses a globally deformable STN network for clothing deformation. [B] employs less controllable appearance flows for clothing deformation. We introduce an MRF-based clothing deformer achieving SOTA performance. 6) Our method outperforms the methods based on [A] and [B] in terms of realism, algorithmic complexity, model size, FPS, and generalization. - In summary, our approach offers significant advantages and a novel solution for virtual try-on tasks. --- - **W-A3:** We have added qualitative and quantitative results on the VITON (see Table 6) and VITON-HD (see Table 7) datasets. Qualitative results are shown in Fig. 3 and 4 (https://github.com/anony-conf/results-USC-PFN). |Table 6|Pub.|Warping|FID-P ↓|KID-P ↓|FID-UP ↓|KID-UP ↓| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |CP-VTON|ECCV 2018|TPS|43.95|2.233|42.13|2.112| |ACGPN|CVPR 2020|TPS|42.10|2.009|41.48|2.048| |DCTON|CVPR 2021|TPS|42.80|2.170|42.19|2.126| |PFAFN|CVPR 2021|AF|22.81|0.785|23.90|0.860| |SGAFN|CVPR 2022|AF|20.07|0.552|20.38|0.481| |**Ours**|/|MRF|**18.70**|**0.390**|**19.50**|**0.355**| --- |Table 7|Pub.|Warping|FID-P ↓|KID-P ↓|FID-UP ↓|KID-UP ↓| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |VITON-HD|CVPR 2021|TPS|32.968|1.407|32.93|1.353| |HR-VITON|ECCV 2022|AF|25.499|0.926|24.826|0.759| |**Ours**|/|MRF|**19.060**|**0.418**|**22.861**|**0.504**| --- - **W-A4:** Thanks for your suggestion. We have added qualitative and quantitative experiments on the high-resolution dataset VITON-HD with a resolution of $512\times 384$, as shown in Table 3, and Fig. 3 and 4 (https://github.com/anony-conf/results-USC-PFN).
--- ### Questions: - **Q-A1:** We have added direct comparison experiments with well-established warping techniques such as TPS and Appearance Flow; please refer to **W-A3**. --- > Thanks for your dedicated efforts. We hope that our responses have provided you with new insights into our work, and that you will consider raising your rating of our paper. --- Rebuttal Comment 1.1: Title: Question on Table 3 and Table 7 Comment: I am grateful to the authors for addressing my concerns effectively through various experiments in the rebuttal phase. If these experiments are adequately incorporated into the revised version of the paper, it is anticipated that they will enhance the quality of the paper significantly. I have some questions about the different outcomes observed between Table 3 and Table 7. In Table 3, FID and KID for HR-VITON appear to align with the values reported in the original paper of HR-VITON. However, the results of Table 7 (in both paired and unpaired settings) are different from those of Table 3. The results presented in Table 3 appear to have been measured under the same conditions as HR-VITON. It would be beneficial to provide a detailed explanation of the method employed to measure the results presented in Table 7. Additionally, there is a notable disparity between the FID and KID metrics in Table 3 compared to the corresponding metrics in Table 7. It is recommended to offer a clear elucidation regarding the factors that have contributed to such divergent outcomes. Moreover, the proposed model demonstrates substantial improvements over the VITON-HD and HR-VITON models in terms of FPS and FLOPS. It would be insightful to understand which specific module within the proposed architecture has significantly reduced the computational cost, whether it is attributed to the proposed warping module or the potential computational efficiency achieved within the image generator.
Conducting an analysis to elucidate this aspect would further enhance the comprehensibility of the advancements achieved by the proposed model. --- Reply to Comment 1.1.1: Title: Answer on Table 3 and Table 7, and Computational Complexity Comment: > **A1:** Thank you very much for your professional and careful feedback. We sincerely apologize for any confusion caused by our unclear presentation. We would like to clarify that **Table 3 is the quantitative results table for the entire network** on the VITON-HD dataset, i.e., the quantitative results of the final virtual try-on result images. Apart from our data, the remaining data in the table were obtained from the official HR-VITON paper. **Table 7, similar to Table 6, is the quantitative results table for the non-rigid garment deformer** on the VITON-HD dataset, i.e., the quantitative results of the warped garment images. Therefore, ***there is a notable disparity between the FID and KID metrics in Table 3 compared to the corresponding metrics in Table 7***, as Table 3 and Table 7 correspond to the Self-cyclic Image Generator (SIG) and Non-rigid Garment Deformer (NGD), respectively. > Furthermore, since the official papers of VITON-HD and HR-VITON did not include quantitative results for the garment deformer, we obtained their official code and weights, and evaluated the garment deformation results under the same configurations as VITON-HD and HR-VITON. We apologize again for any confusion caused by our unclear table explanations. > **A2:** Thank you very much for your professional and constructive suggestions. Our work primarily proposes a new solution distinct from inpainting, cycle consistency, and knowledge distillation, which we term "self-cycle consistency." Please refer to Figure 1 in the paper for more details. > Our proposed framework does not rely on the parser during the inference stage; currently, only the knowledge distillation architecture can achieve this.
In knowledge distillation architecture, the parser-based teacher model imparts prior knowledge by providing the generated try-on results as pseudo ground truth or pseudo unpaired input to the student model. > Differently, our framework employs weight sharing in the two cyclic stages, allowing the try-on results generated in the first stage to be used as input for the second stage to reconstruct the person image achieving consistency. This is distinct from traditional cycle consistency, as implemented in DCTON [2], which has the following features: 1) It employs two non-shared dual networks for training, while only using one during inference (bearing some resemblance to knowledge distillation architecture). Simultaneously training two dual networks is challenging due to one network lacking ground truth. 2) It relies on human parsing as input, necessitating an additional parser during inference to generate corresponding human parsing for input. 3) Its architecture includes a parser (mask prediction network [2]) to generate clothing and skin masks, but flawed human parsing has been proven to lead to erroneous try-on results [6]. These three aspects can be considered shortcomings of [2], which our framework does not have. --- > Our framework consists of the Self-cyclic Image Generator (SIG) and the Non-rigid Garment Deformer (NGD). Due to the nature and features of our architecture, theoretically, any encoder-decoder network (such as ResUnet, Unet) can serve as our SIG and NGD. In our experiments, we used ResUnet for them, without incorporating any complex modules. Therefore, ResUnet can be replaced with a lighter Unet, significantly reducing computational costs without compromising performance. > On the other hand, HR-VITON incorporates a complex Try-On Condition Generator with carefully designed Feature Fusion Blocks and Condition Aligning to deform clothing, addressing occlusions and unnatural deformations. 
However, our experimental results demonstrate that our approach effectively resolves these issues using a standard ResUnet, and the results obtained are superior to HR-VITON. Similarly, its Try-On Image Generator employs a complex structure, resulting in higher parameters, FLOPs, and lower FPS. In contrast, our approach employs ResUnet or Unet for clothing deformation and try-on image generation, achieving better results than HR-VITON while significantly improving computational efficiency. - Therefore, we would like to clarify that our contribution lies in proposing an efficient architecture rather than meticulously designing specific modules to enhance result quality. Extensive experimentation substantiates the effectiveness of the architecture we have proposed. **We apologize for our extensive elaboration above, as we genuinely intended to address your concerns. Once again, we sincerely thank you for your careful review and professional feedback.**
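The self-cycle consistency described in this reply can be sketched as below. This is a hedged toy sketch, not the authors' implementation: `G` stands in for the shared-weight generator, images are flattened to lists of floats, and `l1` is a plain mean absolute error.

```python
# Toy sketch of self-cycle consistency with ONE shared-weight generator,
# used for both the forward (try-on) and backward (reconstruction) stages.

def l1(a, b):
    """Mean absolute error between two flat 'images'."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def self_cycle_loss(G, person, paired_garment, unpaired_garment):
    # Stage 1: dress the person in an unpaired garment.
    tryon = G(unpaired_garment, person)
    # Stage 2: the SAME generator G (shared weights) dresses the try-on
    # result back in the original garment, reconstructing the input person.
    recon = G(paired_garment, tryon)
    # Cycle consistency: the reconstruction should match the input person.
    return l1(recon, person)
```

Because both stages share `G`, a single network is trained and used at inference, in contrast to dual-network cycle schemes such as DCTON that train two non-shared networks but deploy only one.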
Summary: The paper presented a system for image-based virtual try-on. The main contribution consists of (1) a parser-free virtual try-on network trained with unpaired data; (2) an MRF-based deformation estimation network; (3) a cycle-consistency-based training method. The paper performed experiments on the VITON Zalando dataset. The paper is evaluated with several prior works as baselines, including VITON[8], CP-VTON[3], Cloth-flow[9] etc. The main evaluation metrics are SSIM for paired data and FID for data with no ground truth. In terms of quantitative performance, the proposed method achieves better than or comparable results to previous art. In terms of qualitative results, the generated images are visually comparable, and sometimes slightly better than previous work. Strengths: + The paper is addressing an interesting problem. + The main idea is interesting and seems novel to me. Although cycle consistency is not a new idea and has been used in previous work on virtual try-on, the overall combination of ideas still seems new in this specific domain. + Extensive experiments show state-of-the-art quantitative results. The qualitative results have less deformation artifact than previous work. Weaknesses: I'm not an expert on virtual try-on so I will mainly rely on the other reviewers' opinions for more informative feedback about the weaknesses of the paper. One piece of feedback I have is that the results are often overly smooth in the garment area. The paper showed a lot of results of clothing with stripe patterns and the stripes are often very smooth, not reflecting the pose and shape of the human body very well. This is perhaps due to the way that the deformation is learned. It would be great to discuss the potential cause of this over-smoothness and also perform ablation studies. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the aforementioned questions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is discussed in section 4 and seems sufficient to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Thank you for your diligent efforts and valuable suggestions in the peer review process. --- ### Strengths: **S-A:** Thanks for your comments. There is a significant difference between our main idea and the disentangled cycle consistency approach [A]. The differences are summarized as follows: 1) [A] uses a dual network during training but only one network for inference, leading to convergence challenges and increased computational overhead. In contrast, our method achieves efficient convergence with a single shared-weight network, requiring fewer parameters and FLOPs. 2) [A] relies on human parsing as input during both training and inference, making the results sensitive to incorrect parsing. Our approach only takes garment and person images as input, thus avoiding the above issue. 3) [A] employs disentangled cycle consistency to segment clothing and skin, which may cause artifacts at boundaries after their synthesis. Our method directly uses the whole person image as input, thus avoiding this issue. 4) [A] introduces a multi-encoder network for feature extraction, adding complexity. Our approach can utilize any generator as our training network, achieving results surpassing [A]. 5) [A] uses a globally deformable STN network for clothing deformation. We introduce an MRF-based clothing deformer achieving SOTA performance. 6) Our method outperforms [A] in terms of realism, algorithmic complexity, model size, FPS, and generalization. - In summary, our approach offers significant advantages and a novel solution for virtual try-on tasks. ### Weaknesses: - **W-A1:** Thanks for your comments. In fact, this task is based on deep learning for image generation, thus sharing commonality with other image generation tasks.
In comparison to previous virtual try-on literature, our paper provides comprehensive qualitative and quantitative experiments, comparing against state-of-the-art methods [2,3,4,6,7,8,9,13,27,28,31], to demonstrate the effectiveness of the proposed framework. The novelty of our approach is outlined in the introduction and related work sections. Moreover, we have addressed the relevant concerns raised by other reviewers and supplemented the corresponding experiments. Therefore, we sincerely appreciate your impartial and responsible evaluation. We surveyed the experiments conducted in selected state-of-the-art papers to showcase the comprehensiveness of our experiments; the results are as follows. |Methods|Publication|Number of Baselines|Qualitative / Quantitative Results on Clothing Deformation|High-Resolution Dataset VITON-HD| |:--:|:--:|:--|:--:|:--:| |[2]|CVPR 2021|5 ( [ 3, 4, 8, 15, 27] ) |×/×|√| |[6]|CVPR 2021|5 ( [ 3, 4, 9, 11, 27] ) |×/×|√| |[28]|ECCV 2022|6 ( [ 3, 4, 6, 9, 11, 31] ) |×/×|×| |[7]|CVPR 2022|8 ( [2, 3, 4, 6, 8, 9, 27, 31] )|×/×|×| |[13]|CVPR 2022|3 ( [27, 4, 2] ) |√/×|×| |Ours|This Work|10 |Added/Added|Added| - As can be observed, the experiments conducted for our method are thorough and comprehensive. --- - **W-A2:** We are sorry for the confusion. Firstly, a substantial number of striped garments were selected in the experiments to highlight the detailed and efficient performance of the newly proposed MRF-based clothing deformation algorithm. This is because clothing with large solid color blocks and no patterns would not reveal the significant pixel displacement caused by excessive deformation, whereas striped patterns allow clear visualization of the direction of the lines under extensive deformation. - Regarding the challenge of dealing with overly smooth clothing regions, this is indeed a rather tricky aspect of virtual try-on tasks.
Through preliminary experimental validation, we observed that **the perceptual loss in the overall loss function directly affects the smoothness of the clothing.** When the smoothness is excessive, there are fewer wrinkles in the clothing; conversely, there are more wrinkles when it is less smooth. - To validate this hypothesis, we conducted ablation experiments on the perceptual loss and the $L_1$ loss. We controlled the hyperparameters of both to adjust their significance in the overall loss. The results of the ablation experiments are presented in Table 1 and Figure 8.2 (https://github.com/anony-conf/results-USC-PFN). --- |$\lambda_1$|$\lambda_p$|FID ↓| |:--:|:--:|:--:| |0|0|10.66| |1|0|17.45| |0|1|10.76| |1|1|**10.57**| |1|10|11.98| |10|1|11.46| --- - It can be observed that the network converges well only when the perceptual loss is present. When $\lambda_1=1$ and $\lambda_p=1$, meaning neither is artificially up- or down-weighted, the convergence is optimal. Furthermore, from the fourth column of Figure 8.2, it can be seen that the wrinkles on the clothes are most realistic when both losses are utilized, while the second column, without the perceptual loss, exhibits an excessively smooth appearance. Thus, we have preliminary evidence that the smoothness of the clothing is governed by the perceptual loss. We can infer that introducing an adversarial loss would lead to even more realistic wrinkles in the clothing. - However, overall, the appearance of wrinkles results in localized darkened pixels. If not controlled properly, these darkened regions might appear in unexpected areas of the clothing, thereby obscuring its original details. As a result, current virtual try-on methods primarily focus on the authenticity of clothing deformation first, and then address finer details like wrinkles. --- > We sincerely thank you for your dedicated efforts. We hope that our responses have provided you with new insights into our work.
We hope you will consider raising your rating of our paper. If you have further questions or encounter broken links, please comment, and we will assist you promptly.
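The $\lambda_1$/$\lambda_p$ ablation above weighs an $L_1$ term against a perceptual term. Below is a minimal sketch of that combined objective; `feat` is an illustrative stand-in for a pretrained feature extractor (in practice, a perceptual loss is typically computed on VGG activations), and the flat-list "images" are a simplification.

```python
# Hedged sketch of the ablated objective: lambda_1 * L1 + lambda_p * perceptual.

def combined_loss(pred, target, feat, lambda_1=1.0, lambda_p=1.0):
    # Pixel-space L1 term.
    l1_term = sum(abs(a - b) for a, b in zip(pred, target))
    # "Perceptual" term: L1 computed in a feature space; `feat` stands in
    # for a pretrained feature extractor (e.g. VGG activations in practice).
    perc_term = sum(abs(a - b) for a, b in zip(feat(pred), feat(target)))
    return lambda_1 * l1_term + lambda_p * perc_term
```

Setting `lambda_p=0` corresponds to the over-smooth configuration from the ablation's second column, while `lambda_1 = lambda_p = 1` is the best-converging setting reported in the table.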
Rebuttal 1: Rebuttal: > We sincerely appreciate the diligent efforts of the reviewers. We propose a novel parser-free self-cycle consistency framework, USC-PFN. To validate the effectiveness, robustness, and generalization of this architecture, we have added the following supplementary experiments: 1) Qualitative experiments of the clothing deformer NGD based on the VITON dataset. 2) Quantitative experiments of the clothing deformer NGD based on the VITON dataset. 3) Qualitative experiments of the clothing deformer NGD based on the high-definition **VITON-HD** dataset. 4) Quantitative experiments of the clothing deformer NGD based on the high-definition **VITON-HD** dataset. 5) Quantitative experiments of the full network based on the augmented VITON dataset. 6) Qualitative experiments of the full network on the high-definition **VITON-HD** dataset. 7) Quantitative experiments of the full network on the high-definition **VITON-HD** dataset. 8) Computational complexity analysis of the full network on the high-definition **VITON-HD** dataset. 9) Several other relevant ablation experiments. - The tables of all the added quantitative experiments are presented below: --- ### Quantitative experiments of the clothing deformer NGD based on the VITON dataset. |Methods|Publication|Warping|FID-P $\downarrow$|KID-P $\downarrow$|FID-UP $\downarrow$|KID-UP $\downarrow$| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |CP-VTON [3]|ECCV 2018|TPS|43.95|2.233|42.13|2.112| |ACGPN [4]|CVPR 2020|TPS|42.10|2.009|41.48|2.048| |DCTON [2]|CVPR 2021|TPS|42.80|2.170|42.19|2.126| |PFAFN [6]|CVPR 2021|AF|22.81|0.785|23.90|0.860| |SGAFN [7]|CVPR 2022|AF|20.07|0.552|20.38|0.481| |**Ours** (MRF)|**This Work**|**MRF**|**18.70**|**0.390**|**19.50**|**0.355**| --- ### Quantitative experiments of the clothing deformer NGD based on the high-definition VITON-HD dataset.
|Methods|Publication|Warping|FID-P $\downarrow$|KID-P $\downarrow$|FID-UP $\downarrow$|KID-UP $\downarrow$| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |VITON-HD|CVPR 2021| TPS|32.968|1.407|32.93|1.353| |HR-VITON|ECCV 2022|AF|25.499|0.926|24.826|0.759| |**Ours** (MRF)|**This Work**|**MRF**|**19.060**|**0.418**|**22.861**|**0.504**| --- ### Quantitative experiments of full network based on the augmented VITON dataset. |Methods|SSIM $\uparrow$|FID $\downarrow$| |:--:|:--:|:--:| |ACGPN|0.81|20.75| |Cloth-flow|0.86|13.05| |PF-AFN|0.87|12.19| |**Ours**|**0.90**|**10.46**| --- ### Quantitative experiments of full network on the high-definition VITON-HD dataset. |Methods|Publication|Parsing as input|SSIM $\uparrow$|FID $\downarrow$|KID $\downarrow$| |:--:|:--:|:--:|:--:|:--:|:--:| |CP-VTON|ECCV 2018|Y|0.791|30.25|4.012| |ACGPN|CVPR2020|Y|0.858|14.43|0.587| |DCTON|CVPR2021|Y|0.810|15.55|/| |VITON-HD|CVPR2021|Y|0.843|11.64|0.300| |HR-VITON|ECCV2022|Y|0.878|9.90|0.188| |**Ours**|**This Work**|**N**|**0.899**|**9.10**|**0.159**| --- ### Computational complexity analysis of full network on the high-definition VITON-HD dataset. |Methods|Weight Size|#Params|FLOPs|FPS| |:--:|:--:|:--:|:--:|:--:| |VITON-HD|588M|154M|1689.7G|3.76| |HR-VITON|586M|148M|1555.4G|4.09| |**Ours**|**334M**|**87M**|**467.9G**|**22.19**| --- ### Computational complexity analysis of full network on the VITON dataset. |Methods|#Params|FLOPs|FPS|FID-Clothing $\downarrow$|SSIM-TryOn $\uparrow$| |:--:|:--:|:--:|:--:|:--:|:--:| |ACGPN [4]|139M|206G|10|42.10|0.84| |DCTON [2]|153M|194G|19|42.80|0.83| |PF-AFN [6]|99M|69G|34|22.81|0.89| |**Ours** (All ngf=64)|140M|46G|39|18.70|0.91| |**Ours** (NGD ngf=32)|**87M**|**29G**|**40**|**18.85**|**0.91**| --- > The remaining figures and tables for qualitative results, ablation experiments, and rebuttals related to the experiments are all included in the following link and attachment. 
--- - https://github.com/anony-conf/results-USC-PFN --- - https://github.com/anony-conf/USC-PFN/ - (The checkpoints will be made publicly available immediately.) --- --- > We sincerely thank you for your dedicated efforts. We hope that our responses have provided you with new insights into our work, and that you will consider raising your rating of our paper. If you have further questions or encounter broken links, please comment, and we will assist you promptly. Pdf: /pdf/dc15131cc0953763c2374cc758648d641cdeff4b.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present a virtual garment try-on method. A cycle-consistency loss allows for self-supervision, i.e. the method does not require paired data (of the same person wearing different garments) as supervision. Unlike a previous method [2] that also uses cycle consistency, the same network weights are used for both the forward and backward step of the cycle. Additionally, a new garment deformation model based on Markov Random Fields is used instead of previous methods that use one of thin plate splines, appearance flow, or moving least squares. The authors show improved results over almost all previous methods, and similar results to one previous method [6]. Strengths: - Cycle consistency using shared weights makes sense. It might seem like a small change, but does seem to require support from newly introduced losses. - The new deformation model is a contribution, although it is unfortunately not evaluated separately from the full try-on model. - The evaluation is reasonable and results show a significant advantage over the previous cycle-consistent method, and a significant advantage over most other methods (except one). Weaknesses: - The exact differences to [2] that give the proposed method an advantage over [2] are unclear. The authors allude to advantages over [2] in several passages, but never fully describe what the differences to [2] are exactly (apart from re-using the same weights for both directions in the cycle) or where the advantages come from. For example: - In Section 2, the authors mention that [2] prioritizes paired garment-person images which can lead to difficulties with unpaired data. However, the proposed method also uses paired (garment, person) images, so what exactly is the difference? This needs to be clarified. - in Section 3.1, the authors mention a 'deconstruction of the human body' that leads to some problems in [2]. It is unclear what the authors refer to here. 
Do the authors refer to the Densepose descriptor used in [2]? If so, the authors would first have to mention that [2] needs to use the Densepose descriptor, so the reader has enough context to understand this statement. Just looking at Eqs. 5 and 6, the only difference between the proposed method and [2] seems to be that the same weights are re-used for both directions. If there are any other differences, these need to be described explicitly near Eq 5 before referring to them. This needs to be clarified. - Also in Section 3.1, the authors mention that previous methods (including [2]) need an 'a-priori label'. Again, the authors need to describe what this refers to, otherwise the differences to [2] are hard to understand. - The existing method [6] seems to have a similar performance. A discussion of other advantages over [6] would be good. - FID is better for [6] while SSIM is better for the proposed method. - PF-AFN looks a bit more natural to me in all examples of Figure 4 except the first row, and maybe some details in the fourth row. It looks more natural to me because it seems to better follow the shape of the body; the proposed method looks flatter, as if it does not fully follow the body shape. - The evaluation could be improved with additional ablations: - Using SIG with the same deformation method that [2] uses, to have a more direct comparison of using cycle consistency with shared vs. non-shared weights in the forward/backward steps. - Comparing the performance of NGD to previous deformation methods separately from the full try-on pipeline. - The exposition is often confusing and should be improved. - The introduction emphasizes that the method only uses unpaired data, but it seems paired (garment A, person wearing A) data was still used. The introduction should clarify which kinds of pairs were not used for training. I assume the authors refer to (person wearing A, person wearing B) pairs, but this needs to be clarified in the introduction.
- The description of the deformation fields in Eq. 1 and the paragraph before it is a bit confusing. $\hat{f}$ is described as the optimal deformation field, but in the equation it is used as the variable that is being optimized (I would have expected $\hat{f}$ to be the result of the minimization in Eq. 1, rather than the variable being minimized over). - At the end of Section 3, 'infinitely close' is probably not the right phrasing, since i) it's unclear what 'infinitely close' means (why not say 'the same'?) and ii) the goal is to reconstruct $p$ with $\hat{p}$, but in practice, they will not be the same. - $R_{sm}$ is not defined in Eq. 2. The type of smoothing term should at least be described briefly. - A bit more information should be given on the weights $w_\phi$ in Eq. 2. How is the Gaussian constructed? How are its mean and variance computed? I assume something like the centroid and variance of pixel coordinates in the cloth region. A brief mention would be good. - In Eq. 2 and 3 it would be good to denote which variables the minimization is over (i.e. $\text{argmin}_{\hat{f}}$ or $\text{argmin}_{\phi}$). - In Eq. 3, it should probably be "$\dots\text{with} \dots$" instead of "$\dots s.t. \dots$" - Above Eq. 7 the authors mention that additional supervision is introduced for the upper body, but do not mention how it is used, what it consists of, etc. If this is described later on, it would be better to not mention this yet, to avoid confusion, or to re-structure the sections so the reader does not need to know information that will only be provided later on to understand the current text. - In Eq. 7 the notation is confusing. I did not follow why every variable has an additional ' in the notation. Why not use the same input notation as before: $(g', p)$, and output is $\hat{g}'$? - In Section 3.2, some design choices should be discussed: - Why is $\mathcal{G}_\mathcal{A}$ not used directly instead of $\mathcal{G}_\theta$?
(I assume because the authors want to avoid using Densepose in the cycle training, but it should be discussed why, and ideally an ablation should be given.) - Why does $\mathcal{G}_\theta$ need to be pre-trained instead of directly starting training with $\mathcal{G}_\mathcal{A}$ and then finetuning? - Eq. 8 is unclear; it would be clearer to explicitly describe the loss used to train $\mathcal{G}_\theta$ in the second phase, probably something like $\min_\theta \|\mathcal{G}_\theta(g', p) - \mathcal{G}_\mathcal{A}(g', p_d, p_h)\|$ - In Section 3.3, the definition of $L_{sec}$ should be given, or at least a short description of what this loss does. - In Eq. 9, $\mathcal{L}_{per}$ is not defined; it is only defined later on in the next subsection. - Above Eq. 15, step 1 and step 2 are not defined. These probably refer to the forward and backward steps in the cycle, but they should be defined more explicitly. - The loss in Eq. 15 needs a bit more motivation. Why is backpropagation disabled if $\hat{p}'$ is more similar to $p$ than $\hat{p}$? Could this not happen quite frequently at the start of the training? Some discussion is needed. - A bit more information is needed on how SSIM is computed. Is it computed by using the garment as input that the input person is already wearing, and is a held-out test set of paired data used? - In the ablation study when removing the MRF module, what deformation method is used instead? Details: - Near line 80: self-consistency consistency -> self-cycle consistency - In Section 3.2, the paragraph titles should give the acronyms of the two modules in parentheses (NGS, SIG), since the acronyms are used later on in the text. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Please clarify the setup that was used to compute SSIM. - Please clarify all differences to [2] that could cause the improved performance shown in the results.
- Please describe all advantages of the proposed method over [6], and what you consider to be the most important advantages. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Some limitations have been discussed. Impacts from potentially biased datasets could be mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
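The reviewer's question about how SSIM is computed can be made concrete. As a reference, here is a minimal sketch of the standard SSIM formula applied over a single global window; this is an illustrative simplification (the function name and the single-window choice are our assumptions, and the usual definition averages the statistic over sliding Gaussian windows), not the paper's actual evaluation code:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two grayscale images.

    Illustrative sketch only: the standard SSIM (Wang et al.) averages this
    statistic over sliding Gaussian windows; here one global window is used
    for brevity. Stabilizing constants follow the usual K1=0.01, K2=0.03.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

For try-on evaluation, SSIM is typically computed on a held-out paired test set by feeding the garment the person is already wearing and comparing the synthesized output against the original photo, which appears to be the setup the reviewer asks the authors to confirm.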
Rebuttal 1: Rebuttal: > Thanks for your diligent efforts and valuable suggestions. We have provided both quantitative and qualitative results of our garment deformer on the VITON and high-definition VITON-HD (512×384) datasets (see https://github.com/anony-conf/results-USC-PFN). --- ### Weaknesses: - **W-A1:** The differences with [2] are: 1) [2] uses a dual-network design but only one network for inference, leading to convergence challenges and increased computational overhead. Our method achieves efficient convergence with a single shared-weight network, requiring fewer parameters and FLOPs. 2) [2] relies on human parsing as input during both training and inference, making the results sensitive to incorrect parsing. Our method takes only garment and person images as input, thus avoiding this issue. 3) [2] segments clothing and skin, which may cause artifacts at boundaries after their synthesis. 4) [2] introduces a multi-encoder network for feature extraction, adding complexity. Our method can utilize any generator as our training network, achieving results surpassing [2]. 5) [2] uses a globally deformable STN for clothing deformation. We introduce an MRF-based deformer achieving SOTA performance. 6) Our method achieves stronger performance than [6]. In summary, our approach offers a novel solution for virtual try-on tasks. --- - **A1-1:** We apologize for the confusion. In Sec. 2, the "self-cycle training" is our own architecture, not [2]. [2] itself refers to its approach as 'disentangled cycle-consistency.' Moreover, both our method and [2] utilize paired images. --- - **A1-2:** In Sec. 3.1, 'deconstruction of the human body' refers to the fact that [2] introduces a parsing network to generate skin and clothing separately. Incorrect parsing can lead to erroneous try-on results. Our method takes only clothing and person images as inputs, thus avoiding this issue. --- - **A1-3:** The term 'a-priori label' refers to the use of Densepose or human parsing as input.
Our network completely eliminates any such a-priori labels from the input. --- - **W-A2-1:** Thanks for your comments. FID is not universally suitable for comparison with [6], as there are some erroneous images in the datasets, which might put our method at a slight disadvantage in FID. We utilized a publicly available augmented VITON dataset for the purpose of generalization validation. The table below demonstrates that our method significantly outperforms PF-AFN in FID. |Methods|SSIM ↑|FID ↓| |:--:|:--:|:--:| |ACGPN|0.81|20.75| |Cloth-flow|0.86|13.05| |PF-AFN|0.87|12.19| |**Ours**|**0.90**|**10.48**| --- - **W-A2-2:** In Fig. 4, PF-AFN exhibits excessive deformation of the lines on the abdomen and arms, which is highly unnatural and indicative of clothing deformation failure. Our results adhere to the body shape, especially in the arm regions. Some instances appearing flattened are due to our suppression of excessive folding during training, as an abundance of folds could potentially obscure clothing details. --- - **W-A3-1:** We replaced NGD with the STN from [2] to generate try-on results. The qualitative and quantitative results are shown in the table below and Fig. 8.1 (https://github.com/anony-conf/results-USC-PFN). |Methods|SSIM ↑|FID ↓|KID ↓| |:--:|:--:|:--:|:--:| |DCTON [2]|0.83|16.32|0.915| |**Ours**|**0.89**|**10.29**|**0.229**| --- - **W-A3-2:** We have conducted qualitative and quantitative experiments for NGD on both the VITON (Table 1) and the high-definition VITON-HD (Table 2) datasets. See the tables below and Fig. 8.1.
|Methods|Pub.|Warping|FID-P ↓|KID-P ↓|FID-UP ↓|KID-UP ↓| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |ACGPN|CVPR 2020|TPS|42.10|2.009|41.48|2.048| |DCTON|CVPR 2021|TPS|42.80|2.170|42.19|2.126| |PFAFN|CVPR 2021|AF|22.81|0.785|23.90|0.860| |SGAFN|CVPR 2022|AF|20.07|0.552|20.38|0.481| |**Ours** (NGD)|/|**MRF**|**18.70**|**0.390**|**19.50**|**0.355**| --- |Methods|Pub.|Warping|FID-P ↓|KID-P ↓|FID-UP ↓|KID-UP ↓| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |VITON-HD|CVPR 2021|TPS|32.968|1.407|32.93|1.353| |HR-VITON|ECCV 2022|AF|25.499|0.926|24.826|0.759| |**Ours** (NGD)|/|**MRF**|**19.060**|**0.418**|**22.861**|**0.504**| --- - **W-A4:** Thanks for your professional and meticulous comments, which are crucial for enhancing the quality of our paper. We have made major revisions based on these suggestions. However, due to the rebuttal space limit, we regret that we are unable to answer each of the remaining concerns individually. Once again, we sincerely appreciate your diligent efforts. --- ### Questions: - **Q-A1:** The setup that was used to compute SSIM has already been explained on the open-source webpage (https://github.com/anony-conf/USC-PFN/). --- - **Q-A2:** All differences between our work and [2] have been summarized in the aforementioned **W-A1**. --- - **Q-A3:** 1) [6] relies on prior knowledge to guide the student model, which may involve unreliable teacher knowledge. Our approach does not employ a similar structure; instead, it self-guides toward convergence. 2) [6] is unable to mitigate the impact of errors in the teacher knowledge, whereas our approach avoids this issue. 3) [6] designs a complex structured deformer, while we have the flexibility to adopt any generator. - The most significant advantage is that [6] requires a meticulously designed, complex clothing deformer, whereas we can employ any network to serve the architecture, resulting in more realistic deformation effects.
In the try-on synthesis stage, [6] relies on a parser-based prior model, while we rely solely on our own approach. Hence, our method offers a completely novel approach and architecture for the virtual try-on task. --- ### Limitations: - **A:** We conducted experiments on the VITON-HD and augmented VITON datasets to demonstrate the generalization and robustness of our method and that it is not unduly affected by potential dataset biases. --- > We hope that our responses have provided you with new insights into our work. We would be grateful if you would consider raising your assessment of our paper. --- Rebuttal Comment 1.1: Title: Thanks for clarifications and additional experiments Comment: Thanks for the thorough answers and clarifications! The additional experiments fill in some blind spots in the evaluation, especially showing a clearer advantage over [6] and showing the performance of SIG and NGD separately from each other. Assuming that the authors will add the clarifications and additional experiments to the final version of the paper, I will raise my score by one point.
Uncovering the Hidden Dynamics of Video Self-supervised Learning under Distribution Shifts
Accept (spotlight)
Summary: The authors studied the behavior of six popular self-supervised methods in response to various forms of natural distribution shift. The study uncovers a series of interesting findings about the behavior of video self-supervised learning (VSSL) methods. The experiments and results are beneficial for the video representation learning community. Strengths: 1. Extensive experiments are conducted, and sufficient experimental data and analysis are provided. It seems to the reviewer that all the conclusions are grounded. Weaknesses: 1. It seems that the effects of data size and model size are not discussed. 2. Many experiments are conducted and some conclusions are obtained, but no improved VSSL method is proposed, which might have been more interesting. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Any suggestions for future research to design better VSSL methods? 2. Providing a summary table of the 6 methods might be helpful for the readers, given the many experiments and data. 3. The methods are pre-trained on the K400 and K700 datasets, which might not be large enough. If the models are pre-trained on a much larger dataset, will the conclusions in this paper be changed? 4. Does the model size affect the conclusions in this paper? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations have been discussed in the text, and there are no obvious negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time and providing such valuable feedback. We are happy to note that the reviewer finds our arguments well-grounded with the experimental results. > Effect of data and model size - In **Table S6 (Appendix D)**, we notice that using a larger dataset of more diverse videos generally leads to better performance in both InD and OoD, e.g., overall linear evaluation performance is improved by 1.5% and 0.9% in InD and OoD. Amongst the VSSL methods, v-BYOL benefits the most from the availability of diverse pretraining data, e.g., the overall performance of v-BYOL is improved by 3.1% and 1.9% in InD and OoD respectively. Please kindly refer to Appendix D1 where we discuss the effect of using larger data. - We acknowledge that the impact of model sizes would be another area for exploration, which we have mentioned in Section 7 under “Limitations”. While we could not perform experiments on a larger network due to resource constraints, we have now explored a different architecture, video ResNet50 (please see **Table R1 in the attached pdf**), which shows consistent trends with respect to the ViT backbones used in our paper. Given this consistency when changing the backbone, we anticipate similar observations should the model sizes also be reasonably changed. But we agree that further experiments are required to substantiate this. > No improved VSSL method is provided; Suggestions for future research We thank the reviewer for this question. As our overall goal in this work is to study the robustness of *existing* VSSL models under distribution shifts, introducing a new method is beyond the scope of this paper. We do believe, however, that our work can be used to drive the design of future VSSL methods with improved performance. Following, we share some of our thoughts that can serve as suggestions for future work; we will also include this in the final version.
A general guideline could be to train video SSL frameworks to learn *local time-invariant representations* and *global time-variant representations*. Our intuition is that such models would be - robust against viewpoint shifts, as they learn *view-invariance* through local time-invariant representations, similar to v-MoCo and v-SimCLR - robust against mere temporal perturbations, as they learn locally time-invariant representations, similar to v-BYOL and v-DINO - robust to context shifts, as they understand the global temporal dynamics well, similar to v-MAE. The objective function can be designed as a combination of masked reconstruction and contrastive/non-contrastive methods. We will further investigate such approaches in the future. > Summary table We thank the reviewer for sharing this great idea. Please see **Figure R1 (attached pdf)**; we will also add it in the final version. > K400 and K700 datasets might not be large enough; If the models are pre-trained on a much larger dataset, will the conclusions in this paper be changed? - Kinetics700 (K700) consists of 0.5 million videos and is one of the widely used large-scale **open-source** pretraining benchmarks for video self-supervised learning [1,3,72]. - Amongst the other open-source video datasets, a potential alternative is AudioSet 2M, which is popularly used in audiovisual self-supervised learning. We internally experiment with AudioSet as well and the results are added to **Table R3 (attached pdf)**. We find that AudioSet also shows a similar trend to our findings from K700, confirming that our findings are likely to be aligned even when pretrained on other large datasets. - Despite AudioSet being larger than Kinetics700, we do not notice a clear benefit as shown in **Table R3 (attached pdf)**. This is likely because the action classes present in Kinetics are more aligned with the downstream actions compared to AudioSet.
- We note a few prior works that used even larger datasets like IG65M [98] which could have been beneficial in our study; however, they are not open-sourced, hence we could not use them. References: - [1] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training, Z Tong et al., NeurIPS 2022 - [3] Spatiotemporal Contrastive Video Representation Learning, R Qian et al., CVPR 2021 - [72] A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning, C Feichtenhofer et al., CVPR 2021 - [98] Large-scale weakly-supervised pre-training for video action recognition, D Ghadiyaram et al., CVPR 2019 --- Rebuttal Comment 1.1: Title: Reviewer E4vF Comment: The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response.
Summary: The paper studies the generalizability of many self-supervised training approaches in the video domain. The paper uses 17 tasks to comprehensively test different aspects of the models, including view-point change, temporal modeling, and open set generalizability. Strengths: 1. Most of the cutting-edge video pre-training approaches are covered. 2. Generalizability is an important problem for current large foundation models. Weaknesses: 1. It's good to have a hyper-parameter table in the appendix. However, I am not sure if the authors have swept hyper-parameters for each method and made sure each pre-training run is complete. I understand the training for each approach is expensive, but the value will be reduced if the pre-training recipe hasn't been fully explored. 2. Given that the paper studies the pros and cons of each approach, I would expect a proposed approach that performs better in most cases, or a guideline that can help readers develop such a method. 3. The conclusions for each question are currently scattered across different sections. This is fine, but I think having a summary table to compare all approaches and show the findings at the beginning or at the end could help readers understand the pros and cons of each approach better. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How is v-Supervised trained? Does it use Kinetics' labeled data for pre-training? 2. In lines 194 - 198 and Table 1, I didn't see the OoD performance of v-Supervised and v-MAE as better than the others if they learn better time-variant information. Most of the time Table 1 shows mixed results in linear probing and fine-tuning (e.g. v-MAE performs better in OoD with FT but worse in Lin.). 3. Maybe this paper can consider citing [1, 2] in lines 219 - 222. 4. In Table 5, why is the trend of fine-tuning and linear probing very different? [1] Lei, Jie, Tamara L. Berg and Mohit Bansal.
“Revealing Single Frame Bias for Video-and-Language Learning.” [2] Buch, S., Cristobal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei and Juan Carlos Niebles. “Revisiting the “Video” in Video-Language Understanding.” --- **Post-rebuttal** Thank you for the authors' response. I have read it and it addressed my questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitation is adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for sharing such thoughtful reviews. We are happy to see the overall positive feedback provided by the reviewer. > It's good to have a hyperparameter table in the appendix; have the authors swept hyperparameters for each method? We would like to point out that we have indeed swept a broad range of parameters to ensure that the hyperparameters are thoroughly tuned for each method. Below we provide a description for some of the key hyperparameter setups: - **Augmentation**: The optimal setup for augmentation varies amongst contrastive, non-contrastive, and masked autoencoder methods. Based on empirical findings, cropping ratios of [0.08, 1], [0.2, 0.766], and [0.5, 1] are used for contrastive, non-contrastive, and v-MAE respectively. Additionally, color-jittering, blur, and horizontal flip are not applied for v-MAE, but they are applied to the other VSSL methods. - **Epoch**: Following [1, 6], we pretrain these methods for *up to* 800 epochs and track their performance using InD validation sets (UCF101, Kinetics400), and notice that all the methods reach their optimum somewhere between 600 and 800 epochs. We use the best checkpoint of each method in the downstream tasks. - **Learning Rate (LR)**: We individually tune the LR for all the methods through a grid search between 5e-5 and 1e-3. We find the optimal LR for v-MoCo, v-BYOL, and v-DINO to be 3e-4, for v-SimCLR 2e-4, and for v-SimSiam 1e-4. Additionally, the predictor heads are trained with a 10× higher LR than the base LR, to achieve optimal performance. - **Projector head**: The optimal configuration for the projector head also varies amongst the VSSL methods, e.g., an MLP head of 4 layers is optimal for v-SimCLR and v-SimSiam, while an MLP head of 3 layers works best for v-MoCo, v-BYOL, and v-DINO.
- **Predictor head**: The configuration of the predictor head is also adjusted, i.e., v-MoCo and v-SimSiam work best with a predictor head of just 1 layer, while a 2-layer predictor head works best for v-BYOL. - **Others**: We also tune other hyperparameters such as **weight decay** and the **EMA** coefficient to find the optimal configuration for each method. We present the hyperparameters related to each method in **Tables S1 and S3** in **Appendix B**. > Guidelines for future work We thank the reviewer for this question. Following, we share some of our thoughts that can be used for future work. We will also include this in the final version of the paper. A general guideline could be to train video SSL frameworks to learn *local time-invariant representations* and *global time-variant representations*. Our intuition is that such models would be - robust against viewpoint shifts, as they learn *view-invariance* through local time-invariant representations, similar to v-MoCo and v-SimCLR - robust against mere temporal perturbations, as they learn locally time-invariant representations, similar to v-BYOL and v-DINO - robust to context shifts, as they understand the global temporal dynamics well, similar to v-MAE. The objective function can be designed as a combination of masked reconstruction and contrastive/non-contrastive methods. We will further investigate such approaches in the future. > Summary table to compare all approaches We thank the reviewer for sharing this great idea. Please see **Figure R1 (attached pdf)**; we will also add it in the final version. > How is v-Supervised trained? Does it use Kinetics' labeled data for pre-training? Yes, the v-Supervised model is pretrained using the labels of the Kinetics dataset. We follow a similar recipe to [76] ViViT: A Video Vision Transformer. > In Table 1, v-MAE and v-Supervised show mixed results; v-MAE performs better in OoD with FT but worse in Lin. - Yes, the reviewer’s observation is correct.
However, our key takeaway from Table 1 is that *v-MAE consistently outperforms in both OoD setups when finetuned and v-Supervised shows the best performance in 3 out of 4 OoD setups*. It should be noted that the poor performance of v-MAE in Lin. is not specific to context shift; v-MAE shows poor Lin. performance in almost all the setups. Therefore, we conjecture v-MAE and v-Supervised are strong temporal learners based on their overall superior performance. - However, as the trend in real-world evaluation is noisy, as correctly noted by the reviewer, we carry out tests in a controlled setup on a toy dataset. In particular, we aim to disentangle the spatial and temporal representations to accurately evaluate which method learns the temporal dynamics better, irrespective of its spatial representation learning capability. As presented in Figure 2a, both v-MAE and v-Supervised show superiority over the other methods by a very large margin, confirming that they learn better temporal dynamics. > Suggested reference We thank the reviewer for suggesting these refs; we will add them in the final version. They are indeed relevant to our study: - the work by Lei et al. is related to context shift as it studies static appearance bias; - Buch et al. introduce an atemporal probe model, which is relevant to our work, as we also discuss the ability of video models to understand temporal dynamics. > In Table 5, why is the trend of fine-tuning and linear probing very different? The superiority of v-SimCLR and v-MoCo in open-set recognition when finetuned may be attributed to their better generalizability, which, while advantageous in linear probing, becomes a pitfall in open-set scenarios. These highly generalizable models, driven by their overconfidence, often misinterpret unknowns as known classes, leading to incorrect predictions.
Conversely, *weak frozen encoders* avoid such misclassifications due to their limited generalizability and perform better in open-set conditions, as observed in models like v-DINO and v-Supervised. Interestingly, such a trade-off is only noticed in linear evaluation and not in finetuning. --- Rebuttal Comment 1.1: Title: Reviewer jxy6 Comment: The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response.
Summary: The paper proposes a set of benchmarks to assess different robustness properties of video representation learning models, including contrastive, non-contrastive, and generative models. The paper trains multiple of these models using a common training protocol and reports multiple empirical findings on different forms of distribution shift. Strengths: The paper outlines a very clear training and evaluation protocol and nicely outlines the results. I like the "highlights" section at the end of each addressed question section (up to comments below). The considered datasets and models are extensive. The benchmark and models could become a very valuable resource for further robustness benchmarking on video datasets. Weaknesses: - One major weakness of the paper seems to be the missing control for the baseline performance of the proposed methods. For instance, in Table 2, several claims are made with respect to the performance of contrastive vs. other models under viewpoint shifts (l. 246 etc). However, in Table 2 it is unclear whether these improvements originate from improved OoD robustness or in fact from the difference in InD performance. Given that the authors computed error bars for all results, I propose to equip every statement in the paper with a suitable hypothesis test capable of testing the influence of the different factors. - Another major weakness is the missing link to published results. All models in the paper are trained from scratch by the authors, and no numbers are reported by applying existing model checkpoints from the literature and confirming that the trained models reach a comparable performance level. What is the rationale for not verifying model performance e.g. against the released best models trained on image datasets? Other weaknesses: - In the training protocol, parameters like the batch size were kept constant across methods.
However, it is known that methods like SimCLR depend on the availability of large batch sizes, while models like MoCo have mechanisms built in to leverage data more effectively in small-batch-size training setups. Hence, I disagree with the author's statement that the considered setting is "fair". Happy to discuss the rationale for these choices (vs. for example using the best available model configurations). Minimally, it would be good to discuss this point more in the paper, e.g. in the limitation section. - For a purely empirical paper, I would recommend backing up the fairly general statements at the end of each section with statistical tests. - The plots are unreadable without heavily zooming in, e.g. Figure 4 and Figure 6. Figure 5 is borderline. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - If I read the methods correctly, you trained all models from scratch on the respective datasets (Kinetics400 and -700). Did the video models you trained readily outperform the best "static image" models available for the respective methods? - Will you open source the model checkpoints and source code for all models you trained in this study? The paper only makes a statement regarding the code. - The results in Table 2, Viewpoint (ego) seem to be very bad (around 11%). Did you investigate possible causes for this performance drop? How do you justify inferring conclusions from this part of the table (e.g. in l. 246 etc) given the bad InD performance to begin with? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations have been addressed (up to the additional comments I made above).
However, given the amount of datasets used in the paper, I am missing license and copyright statements for these datasets, e.g. in the appendix, that go beyond the citations provided in section 5 in the main paper. This could e.g. be included into Appendix C. If the benchmark is intended for later release as outlined, I think it would be very useful to have such an overview directly in the paper for future reference by users of the benchmark. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
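The hypothesis-test request above can be illustrated with the simplest such check: a Pearson correlation between per-method InD and OoD scores, which probes whether OoD gains merely track InD performance. A hypothetical sketch (the score lists below are invented placeholders, not numbers from the paper):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length score lists."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.corrcoef(a, b)[0, 1])

# Placeholder per-method accuracies (illustrative values, NOT paper results):
ind_scores = [70.1, 72.4, 68.9, 75.0, 71.2]  # in-distribution
ood_scores = [55.3, 58.0, 54.1, 60.2, 56.5]  # out-of-distribution

r = pearson_r(ind_scores, ood_scores)
# A high r suggests OoD gains largely track InD performance; comparing
# per-method relative drops (OoD minus InD) helps factor out that baseline.
```

Beyond correlation, a paired significance test over seeds (the authors report three runs per setting) would be the natural next step for claims comparing two methods.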
Rebuttal 1: Rebuttal: We thank the reviewer for providing such valuable feedback and a thorough review. We are glad to note that the reviewer finds the 'Highlights' at the end of each section useful and finds our work extensive and valuable. > Do the improvements originate from the improved OoD or from the difference in InD performance, e.g., Table 2? We thank the reviewer for raising the question. In **Table R4 (attached pdf)**, we perform statistical tests (Pearson corr.) investigating whether the improvements in OoD are due to *OoD robustness* or to improved InD performance. - Overall, InD vs. OoD performance in linear eval. shows a higher corr. than when finetuned, i.e., 0.71 vs. 0.55. - In the case of viewpoint shift (in Table 2), we notice a strong corr. in *linear eval. of egocentric view* and *finetuned eval. of surveillance camera view*. In all the other viewpoint shift setups, the corr. is fairly low. We further study the relative OoD robustness (i.e., measuring the performance drop w.r.t InD) to compensate for the effect of varying InD performance using the corr. coefficient between OoD and InD performance, and find that our overall conclusions regarding models' robustness are still valid. We note that this is not a perfect solution to compensate for the effect, but it is nevertheless the best we could think of. We'd be happy to try any other suggestions the reviewer might have in mind.
Considering such variations amongst the prior works, we choose to implement and train all the VSSL methods ourselves and pretrain them in identical experiment setups with the necessary hyperparameter tuning for a fair comparison. > What is the rationale for not using models trained on image datasets? Did the video models you trained readily outperform the best "static image" models available for the respective methods? In **Table R2 (attached pdf)**, we compare image vs. video pretraining based on 3 SSL methods (MoCo-v3, DINO, MAE) on a variety of OoD setups. We particularly choose these 3 methods as they were originally proposed with ViT, similar to our setup. The results presented in Table R2 exhibit up to 9.8% improvements when using video models compared to their image variants. > Batch size is kept constant; SimCLR works better w/ large batch but MoCo works better w/ small batch. - We follow a *video* pretraining setup similar to that of [72] and use the same batch size for all the variants. - We would also like to clarify that our v-MoCo is based on MoCo-v3 [23] (not *MoCo by K. He et al., CVPR'20*), and as discussed in [23], MoCo-v3 performs best with a batch size (2048) similar to SimCLR's when pretrained with unlabelled *images* from ImageNet. - From [3] we observe that *video contrastive methods* show very stable performance when using a batch size between 512 and 1024, and below or above that range performance degrades. We believe the batch size of 768 is not a detrimental factor here as it is within the standard range and the LR is adjusted accordingly. > Hypothesis and statistical test to back up the summary statements at the end of each section The Highlights mentioned in section 6 are a summary of the key findings based on our empirical study. To ensure our observations are statistically significant, we run these experiments 3 times with different seeds and report the average and standard deviation. 
Additionally, to strengthen the arguments drawn from evaluations on the real-world datasets, we also conduct a series of toy experiments in a controlled setup to further verify some of these hypotheses. > The plots are unreadable without zooming. We thank the reviewer for pointing it out. We will enlarge the figures in the final version as it allows for an additional page. > Will you open source the model checkpoints and source code? Yes, we will open-source the model checkpoints and source code upon publication. > Possible cause for poor results in egocentric viewpoint shift experiments and additional justifications This is likely due to the challenging nature of the Charades-Ego dataset, which comprises videos of 157 fine-grained household activities; moreover, each video contains multiple labels. However, we would like to highlight that our results are in a similar range to prior works, e.g., [36] reports InD mAP of 23.3 vs. ours 21.4, and OoD mAP of 19.5 vs. ours 16.1. The 2-3% drop is likely because [36] uses RGB *+ optical flow*, whereas our method only uses RGB frames. As for the linear eval. performance being around 11%, we would like to point out that, to our knowledge, no prior work has performed linear eval. on this dataset. > How to make conclusions about OoD when InD performance is not strong, e.g., Table 2 Viewpoint (ego.)? We acknowledge that in such a case, a strong claim about OoD cannot be made. However, our conclusion about viewpoint shift is not only based on the performance on egocentric viewpoint shift but rather on a trend noticed across all 3 viewpoint shifts in both linear and finetuning evaluations. Moreover, our claims are also validated through toy experiments for additional confirmation. > Dataset license We thank the reviewer for this suggestion. In the following, we provide the license statements for each dataset and will also add this in Appendix C. 
- CharadesEgo, MiT-v2: License for Non-Commercial Use - Kinetics, HMDB51, ToyBox: CC BY 4.0 - Mimetics, UCF101, TinyVirat-v2, COIL100, STL-10: Open access - ActorShift, Sims4Action: MIT - RareAct: Apache --- Rebuttal Comment 1.1: Title: Reviewer D3xd Comment: The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. --- Rebuttal Comment 1.2: Title: Re: Rebuttal Comment: Thank you for the comprehensive rebuttal. I especially appreciate the efforts to perform experiments comparing image and video models, as well as the statistical analysis performed. I wanted to follow up on a few points: - **Statistical analysis**, I have follow-up questions on Table R4: - Could you provide the details on how you compute the correlation coefficients, i.e. for which table in the paper? It would help to read a few more details on the analysis (e.g., like you would also put it into the methods/supplement of the paper), like which model was employed, which test was performed, etc. In general, thanks for going in this direction, I think this will greatly strengthen the claims in the paper. The way I read it right now is that you find a correlation between IID and OOD performance across all methods, which was one of my concerns --- I would like to better understand the effect of the model (e.g. contrastive vs. non-contrastive) vs. the confounder that the IID performances vary between the models. (maybe I am also not fully understanding your analysis, hence, please expand) - To make it more concrete: Could you again make a very clear example of how e.g. the statement "contrastive methods (v-SimCLR and v-MOCO) are robust to viewpoint shifts as they consistently achieve better performance in all three setups in both linear and finetuning schemes." (l. 
223 in the paper) maps to Table 2 and is supported by your statistical analysis? There are a few more of these kinds of strong statements in the paper that are not fully clear to me yet; I might follow up with a few additional examples. - **Source code release**: The additional info on open-sourcing model checkpoints and evaluation code is very useful. Regarding the code, is this already in a state ready to release? If so, I think there is an opportunity to send the AC a link to the codebase which they can pass along to me. I would be interested to have a look; if this is well set up it would strengthen the contribution. - **Figure R1**: The color somehow needs to map to a metric/number. I think this overview is great and could e.g. go into the supplement, but you should work on making this quantitative (vs. "low" to "high") I apologize for the late response just before the weekend --- please feel free to post multiple replies as they get ready in case this speeds up the further discussion.
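As a footnote to the correlation analysis discussed in this thread: the InD-vs-OoD Pearson correlation (and the relative robustness, i.e. performance drop w.r.t. InD) can be computed in a few lines of numpy. The method names and accuracies below are hypothetical placeholders, not the paper's numbers:

```python
import numpy as np

# Hypothetical per-method accuracies (NOT the paper's numbers): each entry is
# (InD accuracy, OoD accuracy) for one VSSL method under one evaluation scheme.
scores = {
    "v-SimCLR": (90.1, 61.2),
    "v-MoCo":   (91.0, 62.5),
    "v-BYOL":   (92.3, 60.8),
    "v-MAE":    (88.4, 55.1),
}

ind = np.array([s[0] for s in scores.values()])
ood = np.array([s[1] for s in scores.values()])

# Pearson correlation between InD and OoD performance across methods.
r = np.corrcoef(ind, ood)[0, 1]

# Relative robustness: performance drop w.r.t. InD, which partially
# factors out differences in InD performance between methods.
rel_drop = (ind - ood) / ind
```

A high `r` across methods is exactly the confounder raised above: strong OoD numbers may simply track strong InD numbers, which is why the rebuttal additionally reports the relative drop.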
Summary: This work studies various video SSL methods under distribution shifts. 6 different video SSL methods (SimCLR, MoCo, BYOL, SimSiam, DINO, MAE) are trained on Kinetics-400 under the same experiment settings (e.g. ViT-B, fixed number of epochs, etc.) and then evaluated on various distribution shifts (e.g. viewpoint shift, context shift), and the findings are reported (e.g. contrastive methods are more robust to viewpoint shifts, while MAE is a stronger temporal learner). Strengths: 1. Writing and presentation are clear. 2. Same experimental and test bed setup for all the baselines. The test bed and evaluation code could be useful to the community. 3. Useful empirical observations characterizing the strengths and weaknesses of prominent approaches. Weaknesses: No experiments or analyses on model scaling or different architectures. Some findings are pretty much expected (e.g. contrastive methods not being good temporal learners happens by design), but it's nice to have an easy test bed to quantify them. Some of the settings controlled for can be strong caveats, e.g. fixing the number of epochs can bias evaluations towards methods that converge faster. It is unclear how much of the results hold when models use their respective optimal settings - which is an important setting for evaluations. Some of the differences could stem not from core differences in the pretext task, but from the use of a teacher, e.g. MAE can also use an EMA teacher and the representations would converge much faster. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors please discuss some of the limitations and caveats to their findings (e.g. some are listed above)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing such valuable feedback. We are happy to note that the reviewer finds our work useful to the community. > Experiments on different architectures or model scaling We have now conducted experiments on video ResNet-50 given its strong performance in video SSL [72]. We conduct these experiments in a variety of OoD setups including context shift 10 and 50 classes, source shift in both *UCF to HMDB* and *HMDB to UCF*, and animal domain actor shift. The results are presented in **Table R1 (attached pdf)**. To conduct these experiments, we perform linear evaluations using pretrained weights released by [72] and compare them to the ViT-B. These results confirm that our general findings based on ViT are also applicable when tested using ResNet-based architecture. We anticipate similar trends to hold when scaling the backbone. > Some findings are pretty much expected, but it's nice to have an easy test bed to quantify them We agree with the reviewer that some of these findings might be intuitive and as the reviewer pointed out there is no work that studies the robustness of video SSL models under real-world distribution shifts, in a unified setup. We would also like to point out that some of the findings are in fact quite counterintuitive. For example, we were surprised to find that: - *frozen contrastive encoders* perform poorly in open-set recognition, while achieving the best open-set performance when finetuned; - superior performance of v-BYOL frozen encoder compared to finetuned under source shift (HMDB/UCF); - consistently poor performance of v-MAE in linear evaluations. However, our additional investigation allowed us to further explore these findings and pinpoint the root causes of such intriguing behaviours. 
> VSSL pretraining hyperparameters and convergence We would like to point out that we have swept a broad range of hyperparameters to ensure each method is thoroughly tuned. Below we provide a description of some of the key hyperparameter setups: - **Augmentation**: The optimal augmentation setup varies amongst contrastive, non-contrastive, and masked autoencoding methods. Based on empirical findings, cropping ratios of [0.08, 1], [0.2, 0.766], and [0.5, 1] are used for contrastive, non-contrastive, and v-MAE respectively. Additionally, color-jittering, blur, and horizontal flip are not applied for v-MAE, but they are applied to the other VSSL methods. - **Epoch**: Following [1, 6], we pretrain these methods for *up to* 800 epochs and track their performance using InD validation sets (UCF101, Kinetics400), and notice that all the methods reach their optimum somewhere between 600 and 800 epochs. We use the best checkpoints of each method in the downstream tasks. - **Learning Rate (LR)**: We individually tune the LR for all the methods through a grid search between 5e-5 and 1e-3. We find the optimal LR for v-MoCo, v-BYOL, and v-DINO to be 3e-4, for v-SimCLR 2e-4, and for v-SimSiam 1e-4. Additionally, the predictor heads are trained with a 10× higher LR than the base LR, to achieve optimal performance. - **Projector head**: The optimal configuration for the projector head also varies amongst VSSL methods, e.g., an MLP head of 4 layers is optimal for v-SimCLR and v-SimSiam, while an MLP head of 3 layers works best for v-MoCo, v-BYOL, and v-DINO. - **Predictor head**: The configuration of the predictor head is also adjusted, i.e., v-MoCo and v-SimSiam work best with a predictor head of just 1 layer, while a 2-layer predictor head works best for v-BYOL. - **Others**: We also tune other hyperparameters, such as **weight-decay** and **EMA**, to find the optimal configuration for each method. 
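The per-method setups enumerated above can be collected into a single config table. The values in this sketch are copied from the list above; the dict structure itself is illustrative, and `None` marks values the list does not state:

```python
# Sketch of the per-method pretraining configs described above. Values are
# taken from the rebuttal text; the structure and field names are illustrative.
VSSL_CONFIGS = {
    "v-SimCLR":  {"lr": 2e-4, "crop_ratio": (0.08, 1.0),  "proj_layers": 4, "pred_layers": 0},
    "v-MoCo":    {"lr": 3e-4, "crop_ratio": (0.08, 1.0),  "proj_layers": 3, "pred_layers": 1},
    "v-BYOL":    {"lr": 3e-4, "crop_ratio": (0.2, 0.766), "proj_layers": 3, "pred_layers": 2},
    "v-SimSiam": {"lr": 1e-4, "crop_ratio": (0.2, 0.766), "proj_layers": 4, "pred_layers": 1},
    "v-DINO":    {"lr": 3e-4, "crop_ratio": (0.2, 0.766), "proj_layers": 3, "pred_layers": 0},
    "v-MAE":     {"lr": None, "crop_ratio": (0.5, 1.0),   "proj_layers": 0, "pred_layers": 0},
}

# Predictor heads are trained with a 10x higher LR than the base LR.
PRED_HEAD_LR_MULT = 10
```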
We present the hyperparameters related to each method in **Tables S1 and S3** in **Appendix B**. > Discuss limitations/caveats As discussed in Sec.7 of the paper, it would be of interest to further investigate VSSL methods under distribution shifts with larger Transformer or convolutional architectures, which we could not perform due to resource constraints. However, we believe our findings serve as a foundation for future works to study the behaviour of OoD robustness with model scalability. References: - [1] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training, Z. Tong et al., NeurIPS 2022 - [6] XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video Representation Learning, P. Sarkar et al., 2022 - [72] A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning, C Feichtenhofer et al, CVPR 2021 --- Rebuttal Comment 1.1: Comment: Thanks for your response. I've updated my score to reflect the additional experiments and discussion. I should add that some of these findings are not really surprising, e.g. v-MAE's poor performance in linear evaluations is reflective of the mis-match between the pretext task and linear eval, while this is less so for contrastive learning. So my view is that an easy test bed to quantify these is the more valuable contribution here and hope the authors release the code and make it easy to use - which will also be strongly impactful to the paper. --- Reply to Comment 1.1.1: Comment: Thanks again for all the valuable feedback, going over our experiments and discussions, and updating your score accordingly! We will release all the code and all model checkpoints and will strive to make them an easy test bed for understanding the robustness of VSSL methods.
Rebuttal 1: Rebuttal: We sincerely thank the review committee for their time and for providing constructive feedback. We are happy to see the overall engaging comments given by all the reviewers and glad to note that all reviewers find our work valuable to the community. We have carefully addressed all the concerns raised by the reviewers under the individual response section. Following, we provide a summary of our response. - **VSSL pretraining hyperparameter search**: In response to individual reviewers, we clarified that we have strived to tune all SSL methods studied in this work to achieve their best performance, involving best practices and/or grid search for augmentation setups, learning rates, batch size, weight decay, EMA, mask ratio, temperature, and the configuration of the projector heads, predictor heads, and decoder, among others. Please see the detailed response under **Reviewers iDRZ or jxy6** and a summary of pretraining setups is provided in Tables S1 and S3 in Appendix B. Additionally, all the models and codes will be released upon publication for reproducibility. - **Experiment on existing model checkpoints with a different architecture**: To analyze the generalizability of our findings beyond ViTs, we have conducted experiments to study the OoD robustness of the video SSL methods using existing checkpoints of Video ResNet-50 (Slow only). The results presented in **Table R1 (attached pdf)** show that our findings based on ViT are aligned when evaluated on a video ResNet. - **Verifying image vs. video pretraining**: In order to ascertain the benefits of video pretraining over models pretrained on static images, we have conducted additional experiments comparing the performance of image SSL vs. video SSL pretraining. Our comparison setups include 3 SSL methods MoCo-v3, DINO, and MAE across a variety of distribution shifts. 
The results presented in **Table R2 (attached pdf)** exhibit significant improvements when using video models compared to their image variants. - **Relation between InD vs. OoD performance**: We have now conducted additional statistical analysis investigating the relation between the models' performance in InD vs. OoD. The results presented in **Table R4 (attached pdf)** show a higher corr. when directly using the pretrained frozen encoders (linear probing) compared to finetuned, i.e., an average corr. of 0.71 vs. 0.55. This indicates that finetuning is less beneficial under distribution shifts compared to InD. Please see the detailed response under **Reviewer D3xd**. - **Guidelines for future work**: To further help future research in developing robust and reliable video learning models, we shared some of our thoughts on future work based on the findings of this paper. In short, we find that a robust video representation framework should learn *local time-invariant* and *global time-variant* representations. Our intuition is that such representations are robust to context shift, viewpoint shift, and temporal perturbations, among others. Moreover, such models can be designed by exploring a joint objective of masked reconstruction and contrastive/non-contrastive methods. Please see the individual responses under **Reviewers jxy6 or E4vF** for more details. - **Summary table**: In **Figure R1 (attached pdf)** we provide a high-level overview of the different VSSL models, depicting their robustness and vulnerability under different evaluation setups in both InD and OoD. - **Pretraining with other large datasets**: We have now added results of VSSL methods pretrained on another large-scale dataset, AudioSet, consisting of 2M videos. The results presented in **Table R3 (attached pdf)** exhibit a very similar trend to pretraining on Kinetics700. Such a strong agreement suggests that our findings are likely to hold even when pretraining on other large datasets. 
Please see the detailed response under **Reviewer E4vF**. We hope that our responses adequately address all the points raised by the reviewers. We would be more than happy to address any additional comments the reviewers may have during the discussion phase. Pdf: /pdf/49f1a35ec1d0fc3ee47a0571a684122775e68bb8.pdf
NeurIPS_2023_submissions_huggingface
2023
Deep Optimal Transport: A Practical Algorithm for Photo-realistic Image Restoration
Accept (poster)
Summary: This paper considers the high-perceptual-quality image restoration problem. The authors propose a method to control the tradeoff between the perceptual quality and distortion (e.g. MSE) of a pretrained restoration model. The method is developed based on a recent theory [1] on the tradeoff between MSE distortion and perception quality measured by Wasserstein-2 distance, which indicates that minimum MSE restoration under a perfect perception constraint can be achieved by an optimal transport from a minimum MSE estimator to a perceptual estimator. Meanwhile, in theory the perception-distortion tradeoff, with perception quality measured by Wasserstein-2 distance and distortion measured by MSE, can be controlled by a linear interpolation between a minimum MSE estimator and a perfect perceptual estimator. Based on this theory, the authors propose to transport the output of a pretrained restoration model to improve the perceptual quality. The optimal transport is performed in the latent space under a Gaussian distribution assumption, with which closed-form optimal transport can be derived and, finally, the perception-distortion tradeoff is controlled by linearly transforming the first- and second-order statistics (means and covariances) in the latent space of a pre-trained model. Experimental results on various image restoration tasks have been provided to demonstrate the effectiveness of the proposed method. Strengths: 1. The proposed method is interesting: it can be viewed as a post-processing method that, given a pre-trained restoration model, achieves a tradeoff between perception quality (measured by Wasserstein-2 distance) and distortion (measured by MSE) via a simple training stage computing empirical first- and second-order statistics (means and covariances) in the latent space under a Gaussian distribution assumption. 2. This paper is clearly written and easy to read. 3. Experimental results verify the effectiveness of the proposed method. Weaknesses: 1. 
Lack of theoretical novelty. The proposed method is based on the theory in [1] (see Section 3) and appears to be more of an extended evaluation of the results in [1]. Besides, the adopted approach of performing the transport in the latent space is also not new and is borrowed from existing work. 2. While the proposed method is interesting, its post-processing nature may limit its practical use since it requires a test-time training procedure. In comparison, one-stage restoration models would be preferred in practical applications. Although the test-time training only uses a dozen images restored by the pretrained model, retraining may be necessary for it to perform well when the degradation strength changes. 3. The assumption that the latent representations follow a Multivariate Gaussian distribution is relatively strong. The authors claim that their method can be applied to any pre-trained model. I'm afraid this assumption might limit its application as it requires the latent representation distribution to be (close to) Multivariate Gaussian. 4. Using optimal transport for image restoration is not new, e.g. for image denoising and super-resolution, which have demonstrated the capability of yielding high perception quality. A natural question is how the proposed method compares with such one-stage optimal-transport-based restoration approaches. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The authors mentioned the limitation in restoring human faces and text images, of which the pixels are highly correlated with each other. Is it because the patch size set to $p={3,5}$ in this paper is relatively small and fails to extract mutual information while transporting? Another question is that the authors demonstrate that a larger patch size $7≤p≤15$ can yield slightly worse PSNR. Is it because the distribution is only close to MVG with small $p$? 
The assumption might be far from valid when $p$ gets larger, especially for face and text images, which are highly structured. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Lack of theoretical novelty. The proposed method is based on the theory in [1] (see Section 3) and appears to be more of an extended evaluation of the results in [1]. Besides, the adopted approach of performing the transport in the latent space is also not new and is borrowed from existing work. The reviewer is correct to point out that we extend the theory introduced by [(Freirich et al.)](https://proceedings.neurips.cc/paper/2021/hash/d77e68596c15c53c2a33ad143739902d-Abstract.html). Note, however, that this work is only theoretical and does not propose a practical way to achieve the Dmax estimator. Similarly, we do not hide the fact that our latent transport approach is inspired by prior works (see lines 50-53). Nonetheless, we introduce several key improvements to the naive channel-wise transport that enable our algorithm to improve the perceptual quality of *any* restoration algorithm on *any* inverse problem task with only a few unpaired examples to train on. As far as we know, these achievements are unprecedented in the literature, and thus we are confident the novelty of our work is significant. We will make sure to emphasize this in the **related-work Section**. > While the proposed method is interesting, its post-processing nature may limit its practical use since it requires a test-time training procedure. In comparison, one-stage restoration models would be preferred in practical applications. The reviewer's standpoint is interesting because it contrasts with that of the other reviewers, who identified the plug-and-play property of our approach as a strength rather than a weakness. In the vast majority of applications, it seems sound to assume that we can obtain a dozen unpaired images to apply the test-time procedure, in which case all the one-stage restoration models we considered were improved by our algorithm. 
> Although the test-time training only uses a dozen images restored by the pretrained model, retraining may be necessary for it to perform well when the degradation strength changes. As noted by the reviewer, we apply the algorithm at test-time, on the output of a given model. Interestingly, retraining is necessary only if the pre-trained model itself needs retraining, in which case the few-shot procedure is likely to consume a negligible fraction of the computation and time resources allocated to retrain the one-stage model. We will make sure to clarify this in the revised version of the paper. > The assumption that the latent representations follow a Multivariate Gaussian distribution is relatively strong. The authors claim that their method can be applied to any pre-trained model. I'm afraid this assumption might limit its application as it requires the latent representation distribution to be (close to) Multivariate Gaussian. Actually, we perform the transport in the latent space of a Variational Auto-Encoder, which is precisely trained to achieve a normally distributed latent representation. Additionally, and as pointed out by the reviewer, closed-form Gaussian transport in the latent space is not new, and has proven its stability over a wide range of applications. > Using optimal transport for image restoration is not new, e.g. for image denoising and super-resolution, which have demonstrated the capability of yielding high perception quality. A natural question is how the proposed method compares with such one-stage optimal-transport-based restoration approaches. Since our algorithm is conceived to improve an already existing method, it cannot be compared toe-to-toe with other one-stage approaches (whether they are transport based or not). Nonetheless, we can apply our algorithm to further improve the performance of these methods. 
> The authors mentioned the limitation in restoring human faces and text images, of which the pixels are highly correlated with each other. Is it because the patch size set to $p=3,5$ in this paper is relatively small and fails to extract mutual information while transporting? This is an interesting idea. Note, however, that the Encoder used to embed the images admits a large receptive field (typically 64 pixels depending on the configuration). Hence, it is safe to assume that the spatial correlations are conserved even with a small latent patch size. On the other hand, we show in the **Limitation Section** that human faces and text are already largely distorted by the Variational Auto-Encoder (VAE). I.e., encoding and then decoding a clean image containing text (without any transport) typically results in deteriorated text. Therefore, we deem it more plausible that the problem resides in the VAE rather than the patch size. This is an important clarification we will add to **Section 6**. > Another question is that the authors demonstrate that a larger patch size $7 \leq p \leq 15$ can yield slightly worse PSNR. Is it because the distribution is only close to MVG with small $p$? The assumption might be far from valid when $p$ gets larger, especially for face and text images, which are highly structured. As discussed earlier, we use the latent representation of a VAE, and therefore the latent images' prior follows a MVG. The reviewer is correct to point out that a larger patch size typically results in a deterioration of the approximated distribution. As explained in **Appendices B.3-B.4**, the dimensionality of the prior grows quadratically with the patch size, and therefore the number of covariance matrix parameters is proportional to $p^4$. With that, the number of samples available to compute these parameters shrinks quadratically with $p$. In this regard, it is much more challenging to accurately approximate the transport operator for $7 \leq p \leq 15$ compared to $p=3,5$. 
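The closed-form Gaussian transport this thread keeps referring to can be sketched in a few lines of numpy. This is a minimal illustration of the W2-optimal map between two Gaussians and of the linear interpolation controlling the tradeoff; function names are mine, and the paper applies the map to latent patch statistics rather than raw vectors:

```python
import numpy as np

def sqrtm_psd(M):
    # Symmetric PSD matrix square root via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_w2_map(mu_s, cov_s, mu_t, cov_t):
    """Closed-form W2-optimal transport map between two Gaussians:
    T(x) = mu_t + A (x - mu_s), with
    A = cov_s^{-1/2} (cov_s^{1/2} cov_t cov_s^{1/2})^{1/2} cov_s^{-1/2}.
    """
    s_half = sqrtm_psd(cov_s)
    s_half_inv = np.linalg.inv(s_half)
    A = s_half_inv @ sqrtm_psd(s_half @ cov_t @ s_half) @ s_half_inv
    return lambda x: mu_t + A @ (x - mu_s)

def interpolate(x, T, alpha):
    # alpha = 1 keeps the (low-distortion) input unchanged; alpha = 0 applies
    # the full transport, maximizing perceptual quality according to the theory.
    return alpha * x + (1.0 - alpha) * T(x)
```

Note also the scaling issue discussed above: for a latent patch of side $p$ with $c$ channels, the covariance has $(c\,p^2)^2$ parameters, i.e. $O(p^4)$, which is why small patch sizes are easier to estimate from few samples.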
--- Rebuttal Comment 1.1: Comment: I thank the authors for their replies. I would like to keep my rating.
Summary: This paper presents an image restoration algorithm aiming to further improve images that have already been restored by pre-trained restoration models. As most restoration tasks use MSE as the main criterion, the restored images tend to be blurred in order to achieve better PSNR. This work takes advantage of optimal transport from a source to a target distribution to improve perceptual quality, where the optimal transport operator is computed from the mapping between the distribution of restored images and that of original images. Strengths: + This work proposes an enhancement method applied after degraded images have been restored by pre-trained restoration models. + The tradeoff between MSE and perceptual quality is discussed, and the balancing process between the two criteria is formulated and visualized on multiple tasks. + Only a few unpaired images are required to compute the optimal transport. Weaknesses: - As the proposed method uses a few images to compute the optimal transport, how can one guarantee that the distribution of the test data is aligned with that of the training data? Does the selection of the training data have an impact on the restoration performance? - In Table 1, the numerical results of restoration with different alpha settings are given. For different restoration tasks, alpha is set to different values for comparison. What criteria are used to select this parameter? - The experiments are conducted on the ImageNet dataset. But for most restoration tasks, the restored images have relatively large resolution. Images in the ImageNet dataset have considerably small resolution, and the original images have been compressed with lossy compression. Why don't the authors evaluate the method on task-specific datasets? - The method has a similar mechanism to flow-based image deblurring or image sharpening. The method itself has limited restoration capabilities on degraded images. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the meaning of the term “D(E(x))” in Table 1 and Figure 6? Does it refer to the decoding results obtained from the encoded latent representation of x? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Authors have addressed the limitations of the method in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > As the proposed method uses a few images for optimal transport computing. How to guarantee that the distribution of test data is aligned with the training data. Does the selection on the training data have an impact on the performance of restoration? Our experiments showed that the **class of images** does not have a significant impact on the performance (e.g. one could use images of cars to improve images of dogs). However the **resolution** of images does play a *significant* role in attaining the best performance. I.e., to transport `512x512` images, it is best to use training images of the same resolution. This drawback is somewhat mitigated by the few-shot nature of the algorithm. We thank the reviewer for drawing our attention to this topic, and we will make sure to include this discussion in the revised version of the paper. >In table 1, the numerical results of restoration with different alpha settings are given. For different restoration tasks, the alpha is set to different values while comparison. What criteria are used to select the parameter? We selected $\alpha$ by observing the interpolation & extrapolation curves in **Figure 3** and chose the values with the most significant effect on performance. Like any other hyper-parameter, $\alpha$ can improve performance with some tuning when approaching a new task or dataset. We argue that the few-shot nature of our algorithm makes this tuning actually practical (it does not need to be set before performing some expensive training). In any case, $\alpha=0$ consistently improves perceptual quality for all tasks and models considered (as expected from the theory). We consider it to be a satisfying default choice, such that manually adjusting $\alpha$ is not too great of a concern. 
As other reviewers pointed out the lack of consistency in selecting the reported values of $\alpha$, we will make sure to clarify this in the experiment section, and additionally report the performance for $\alpha=0$ in **Table 1**. > The experiments are conducted on the ImageNet dataset. But for most restoration tasks, the restored images have relatively large resolution; images in ImageNet have considerably smaller resolution, and the originals have been compressed with lossy compression. Why don't the authors evaluate the method on task-specific datasets? As pointed out by another reviewer, the perception metrics FID, IS and KID are not stable on common restoration datasets of small-to-medium size. Hence, we perform our evaluation on the 50,000 ImageNet validation samples, following very popular image restoration papers like ([Saharia et. al, 2021](https://arxiv.org/abs/2104.07636),[Rombach et. al, 2021](https://arxiv.org/abs/2112.10752)). This is also why it is impractical to perform a serious quantitative evaluation of the perception-distortion tradeoff on real-world datasets (e.g., SIDD, DND, RealSR), which have too few samples. Finally, we conduct the qualitative evaluation on popular samples from DIV2K or Set14. This non-trivial question was also raised by another reviewer. We shall include these clarifications after line `194` in the paper. > The method has a similar mechanism to flow-based image deblurring or image sharpening. The method itself has limited restoration capability on degraded images. There is actually much similarity with even older, classical works which apply a carefully chosen linear transformation to all overlapping patches of the degraded image. We consider this an advantage of our method rather than a drawback. Applying linear transformations in the latent space could prove to be a powerful yet simple approach with interesting properties like few-shot learning and robustness. 
Our algorithm is not designed to directly restore degraded images but rather to improve the perceptual quality of **any** restoration model at test time. In this regard, we demonstrate significant performance gains on a wide range of models (regression, GANs, diffusion) and tasks (super-resolution, denoising, JPEG artifact removal, etc.).
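For readers unfamiliar with the latent transport step discussed in this thread, the closed-form W2-optimal map between Gaussians is the natural sketch of a linear latent-space transport. The snippet below is a toy illustration with synthetic Gaussian "latents"; the function names and dimensions are our own, and the paper's actual estimator fits the statistics from the few unpaired samples mentioned above.

```python
import numpy as np

def sqrtm_psd(M):
    # symmetric PSD matrix square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_ot_map(mu_src, cov_src, mu_tgt, cov_tgt):
    """Closed-form W2-optimal map between Gaussians:
    T(x) = mu_tgt + A (x - mu_src), with
    A = cov_src^{-1/2} (cov_src^{1/2} cov_tgt cov_src^{1/2})^{1/2} cov_src^{-1/2}."""
    s = sqrtm_psd(cov_src)
    s_inv = np.linalg.inv(s)
    A = s_inv @ sqrtm_psd(s @ cov_tgt @ s) @ s_inv
    return lambda x: mu_tgt + (x - mu_src) @ A.T

rng = np.random.default_rng(0)
d, n = 4, 20000
mu_s, mu_t = rng.normal(size=d), rng.normal(size=d)
B, C = rng.normal(size=(d, d)), rng.normal(size=(d, d))
cov_s, cov_t = B @ B.T + np.eye(d), C @ C.T + np.eye(d)

T = gaussian_ot_map(mu_s, cov_s, mu_t, cov_t)
z = rng.multivariate_normal(mu_s, cov_s, size=n)  # "restored" latents (source)
z_T = T(z)                                        # transported latents
# the transported samples should match the target mean and covariance
print(np.allclose(z_T.mean(0), mu_t, atol=0.1),
      np.allclose(np.cov(z_T.T), cov_t, atol=0.3))
```

In the Gaussian case the map is affine, which motivates approximating the transport by a single linear transformation in latent space.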
Summary: This paper presents an image restoration algorithm that builds upon a trained network to further minimize MSE. To achieve this goal, the optimal transport is approximated by a linear transformation in the latent space. Visual results show clear improvement from the proposed approach. Strengths: - The idea is straightforward, as optimizing the delta is common in traditional image restoration - The visual results are convincing Weaknesses: - The uncertainty of the proposed idea is low, so the novelty is relatively limited. Technical Quality: 3 good Clarity: 3 good Questions for Authors: By applying the proposed algorithm another time, will the results be further improved? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > `[Summary]` This paper presents an image restoration algorithm that builds upon a trained network to further minimize MSE We would like to clarify that the main goal of our algorithm is actually to improve perceptual quality. As a side-effect, we discovered empirically that we could extend the theory introduced in [(Freirich et. al)](https://proceedings.neurips.cc/paper/2021/hash/d77e68596c15c53c2a33ad143739902d-Abstract.html) to also improve MSE, but this is not the main contribution of our paper. > The uncertainty of the proposed idea is low, so the novelty is relatively limited. The main contribution of this work is a proposed algorithm that improves the perceptual quality of *any* restoration algorithm on *any* inverse problem task, with only a few unpaired examples to train on. As far as we know, these achievements are unprecedented in the literature, and thus we believe that the novelty in this work is significant. > By applying the proposed algorithm another time, will the results be further improved? This is actually an interesting idea we tested on super-resolution when conducting our evaluations. As a matter of fact, the performance *does not* improve (it even degrades a bit) when applying the algorithm another time. The explanation is quite simple: after transporting the test images once using the VAE, their latent distribution aligns with that of the natural images. Hence, transporting another time does nothing (the transport operator is the identity matrix). We are only left with the reconstruction error introduced by the encoding and decoding of the images, which degrades the MSE performance. We agree with the reviewer that this experiment adds to the reader's understanding and helps delineate the limitations of our algorithm. We shall add it in the supplementary material. --- Rebuttal Comment 1.1: Comment: The authors' response addresses my question well. I'll stick to my original rating.
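The identity-map explanation in the reply above can be sanity-checked in the Gaussian case: once the source and target statistics coincide, the linear part of the W2-optimal map reduces to the identity, so a second transport does nothing. A minimal sketch (our own toy code, not the authors' implementation):

```python
import numpy as np

def sqrtm_psd(M):
    # symmetric PSD matrix square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def ot_linear_part(cov_src, cov_tgt):
    # A = cov_src^{-1/2} (cov_src^{1/2} cov_tgt cov_src^{1/2})^{1/2} cov_src^{-1/2}
    s = sqrtm_psd(cov_src)
    s_inv = np.linalg.inv(s)
    return s_inv @ sqrtm_psd(s @ cov_tgt @ s) @ s_inv

rng = np.random.default_rng(1)
d = 5
B = rng.normal(size=(d, d))
cov = B @ B.T + np.eye(d)
# after one transport, source statistics equal target statistics,
# so the second transport map collapses to the identity matrix
A = ot_linear_part(cov, cov)
print(np.allclose(A, np.eye(d), atol=1e-6))
```

This matches the rebuttal's argument that only the VAE's encode/decode reconstruction error remains on a second pass.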
Summary: The paper proposes an image restoration algorithm that can control the perceptual quality and/or the mean square error (MSE) of any pre-trained model, trading one over the other at test time. Strengths: 1. The method is plug-and-play, requires only a few samples, and does not require further training. 2. The method approximates the optimal transport by a linear transformation in the latent space of a variational auto-encoder, which is somewhat novel. Weaknesses: 1. The main concern is in the experiments section. (1) It would be better to evaluate the method on real-world datasets (e.g., SIDD, DND, and RealSR datasets), which may make more sense. (2) Quantitative results of $\hat{x}_{0}$ should be given in Table 1. (3) Quantitative results of some important ablation experiments (e.g., paired vs. unpaired samples, and transporting the degraded measurement directly) could be given. (4) LPIPS is generally regarded as a perception metric in image restoration tasks. Since FID, IS and KID are not very stable, they are generally not used in image restoration tasks. Just looking at PSNR, SSIM and LPIPS, the method doesn't seem to achieve a good distortion-perception tradeoff. 2. The interpolation constant $\alpha$ seems to be adjusted manually, which may be inflexible. 3. Projected distribution loss [1] and sliced Wasserstein loss [2] also show a better distortion-perception tradeoff in image restoration tasks, although they need to be used directly in training. I don't know if the authors have tried this. Further elaboration may be needed on how the proposed method relates to and differs from these losses. [1] Projected Distribution Loss for Image Enhancement. ICCP 2021. [2] Self-supervised learning for real-world super-resolution from dual zoomed observations. ECCV 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. I am willing to improve the score if the concerns are addressed well. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations have been described in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > (4) LPIPS is generally regarded as a perception metric in image restoration tasks. Since FID, IS and KID are not very stable, they are generally not used in image restoration tasks. Just looking at PSNR, SSIM and LPIPS, the method doesn't seem to achieve a good distortion-perception tradeoff. Please note that, *by definition*, LPIPS is a distortion metric, as it is evaluated on pairs of images. Interestingly, the original perception-distortion paper [(Blau & Michaeli, 2018)](https://openaccess.thecvf.com/content_cvpr_2018/html/Blau_The_Perception-Distortion_Tradeoff_CVPR_2018_paper.html) already classified the VGG loss (the ancestor of LPIPS) as a distortion, on which the tradeoff exists. Also, the reviewer is correct to point out that FID, IS and KID are not stable on common restoration datasets, **but this is largely caused by these datasets' small size**. > (1) It would be better to evaluate the method on real-world datasets (e.g., SIDD, DND, and RealSR datasets), which may make more sense. Following the comment above, this is why we perform our evaluation on the 50,000 ImageNet validation samples, following very popular image restoration papers like ([Saharia et. al, 2021](https://arxiv.org/abs/2104.07636),[Rombach et. al, 2021](https://arxiv.org/abs/2112.10752)). This is also why it is impractical to perform a serious quantitative evaluation of the perception-distortion tradeoff on real-world datasets (e.g., SIDD, DND, RealSR), which have too few samples. Again, LPIPS is a distortion, not a perceptual quality metric. Finally, we do conduct the qualitative evaluation on popular samples from DIV2K or Set14. This non-trivial question was also raised by another reviewer. We shall include these clarifications after line 194. > (2) Quantitative results of $\hat{x}_0$ should be given in Table 1. (3) Quantitative results of some important ablation experiments (e.g., paired vs. 
unpaired samples, and transporting the degraded measurement directly) can be given. In **Table 1**, we preferred not to overload the reader with yet more rows, but rather to have them focus on the way we can trade perception for distortion with different values of $\alpha$, which constitutes the main result of the paper. However, concerns about the reported values of $\alpha$ were raised by other reviewers, and we understand the importance of these results. Note that quantitative results for $\hat{x}_0$ are visible in **Figure 3**, but we shall add the exact performance to **Table 1**. We will also add the ablation figures in a complementary table in the appendix. > The interpolation constant $\alpha$ seems to be adjusted manually, which may be inflexible. Like any other hyper-parameter, $\alpha$ can improve performance with some tuning when approaching a new task or dataset. We argue that the few-shot nature of our algorithm makes this tuning actually practical (it does not need to be set before performing some expensive training). In any case, $\alpha=0$ consistently improves perceptual quality for all tasks and models considered (as expected from the theory). We consider it a satisfying default choice, such that manually adjusting $\alpha$ is not too great a concern. As other reviewers pointed out the lack of consistency in selecting the reported values of $\alpha$, we will make sure to clarify this in the experiment section, and additionally report the performance for $\alpha=0$ in **Table 1**. > Projected distribution loss [1] and sliced Wasserstein loss [2] also show a better distortion-perception tradeoff in image restoration tasks, although they need to be used directly in training. I don't know if the authors have tried this. Further elaboration may be needed on how the proposed method relates to and differs from these losses. The paper is interested in the MSE-W2 tradeoff following the transport theorem introduced by [(Freirich et. 
al, 2021)](https://proceedings.neurips.cc/paper/2021/hash/d77e68596c15c53c2a33ad143739902d-Abstract.html). From a theoretical standpoint, it is not at all clear how we could obtain the Dmax estimator $\hat{x}_0$ using projected losses. In practice, we carefully designed our algorithm to improve existing restoration models at test time. Therefore, the aforementioned losses are not obvious candidates for boosting performance. That said, we absolutely agree with the reviewer that approximating the Dmax estimator in other perception-distortion planes is a most interesting topic, and we are determined to investigate this in future research. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: After reading the other reviewers' comments and the rebuttals, I raise my rating. Besides, regarding the method's effectiveness on real-world datasets, the authors could also utilize deep learning-based image quality assessment (IQA) methods or user studies for perceptual evaluation.
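The role of the interpolation constant discussed in this thread can be illustrated with a toy sketch. We assume the convention stated in the rebuttal that $\alpha=0$ returns the fully transported, perceptually oriented estimate and larger $\alpha$ moves toward the low-distortion estimate; all the estimates below are synthetic stand-ins, not outputs of the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                   # toy ground-truth signal
x_star = 0.8 * x                            # stand-in MMSE-style estimate (low MSE)
x_hat0 = x + 0.5 * rng.normal(size=1000)    # stand-in perceptual estimate (higher MSE)

def interpolate(alpha):
    # alpha = 0 -> perceptual endpoint x_hat0; alpha = 1 -> distortion endpoint x_star
    return alpha * x_star + (1 - alpha) * x_hat0

# moving alpha from 0 toward 1 trades perceptual quality for lower distortion
mses = [np.mean((interpolate(a) - x) ** 2) for a in (0.0, 0.5, 1.0)]
print(mses[2] <= mses[1] <= mses[0])
```

In this toy setting the MSE shrinks as $\alpha$ moves toward the distortion endpoint, mirroring the tradeoff traversal reported in Table 1 and Figure 3.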
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose a few-shot algorithm to obtain higher-quality restored images from a given model, such as VAEs and diffusion models. Specifically, the optimal transport map in the latent space is computed from the representations of the real images and the reconstructed images. By applying the OT map, a better restoration can be obtained. Experiments show that the proposed method is effective. Strengths: The paper successfully applies the theory proposed by [1] to image restoration and can generate high-quality reconstructed images. Weaknesses: * The presentation of the paper is not good and some concepts are unclear. For example, * In lines 116-117, what is the meaning of $x^*$ and $\hat{x}_0$? * Is $x^*$ sampled from $p(x|z)$ such that it achieves the minimal MSE between the reconstruction and the input? Similarly, how is $\hat{x}_0$ defined, and what is the meaning of the max MSE error? * How are $p_x$ and $p_{x^*}$ defined? * Line 121: the OT plan should be between two distributions, not between data points. * Why does the unpaired dataset give better performance than the paired dataset? This does not seem to make sense, and a necessary explanation is needed. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > In line `116-117`, what's the meaning of $x^*$ and $\hat{x}_0$ ? $x^*$ is the MMSE estimate, being the posterior mean. This solution gives the best MSE distortion performance, but at the cost of poor visual quality. $\hat{x}_0$, on the other hand, is the Dmax solution: it is of perfect perceptual quality while giving the smallest distortion possible. Note that these are the two extremes of the perception-distortion curve, which we refer to in **Figure 1** of the paper. We will update the text in line `116-117` to include these more detailed explanations. > Is $x^*$ sampled from $p(x|z)$ that achieves the minimal MSE between the reconstruction and the input? Similarly, how to define $\hat{x}_0$ and what's the meaning of max MSE error ? As $x^*$ is the MMSE estimate, it cannot be obtained as a single sample from the posterior. Note, however, that $x^*$ can be approximated by drawing many samples from $p(x|y)$ and averaging them, as it is the posterior mean. This property is not being used in the paper. As for $\hat{x}_0$, it is defined in line `117` as the estimator attaining the *minimal* MSE while having perfect perceptual quality. This is not to be confused with the *notation* Dmax introduced by [(Blau & Michaeli, 2018)](https://openaccess.thecvf.com/content_cvpr_2018/html/Blau_The_Perception-Distortion_Tradeoff_CVPR_2018_paper.html), which refers to this MSE quantity, i.e., the maximal distortion along the perception-distortion curve (see **Figure 1**). We will clarify this nuance in the revised version of the paper. > How to define $p_x$, $p_{x^*}$ ? $p_x$ and $p_{x^*}$ are the probability distributions of the random variables $x$ and $x^*$. The reviewer is correct to note that the formal definition is lacking in the paper. As we state in **Section 3.2**, $x$ and $x^*$ are random variables defined over $\mathbb{R}^n$, so the definitions of $p_x$ and $p_{x^*}$ are often omitted for conciseness, as in [(Freirich et. 
al, 2021)](https://proceedings.neurips.cc/paper/2021/hash/d77e68596c15c53c2a33ad143739902d-Abstract.html). We will add these definitions for completeness. > Line `121`, OT plan should be between two distributions, not to data points. The reviewer is correct, and the OT plan is indeed performed between $p_x$ and $p_{x^*}$ as the notation $T_{p_{x^*} \longrightarrow p_x}$ suggests and as we define in `101-102` following [(C. Villani, 2008)](https://cedricvillani.org/sites/dev/files/old_images/2012/08/preprint-1.pdf). We will clarify this better in the revised version of the paper. > Why does the unpaired dataset give better performance instead of paired dataset? It doesn't seem to make sense and the necessary explanation is needed. As explained in line `252-253`, it appears that using paired updates actually diverges from the algorithm introduced in **Section 4.1** and might introduce a statistical bias which hinders the covariance matrix estimation. As emphasized in the text, it is only a *hypothesis* and this phenomenon requires further research. Unfortunately, it is not the primary result of our algorithm and its formal explanation is out of the paper’s scope. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I still think the paper needs to be further polished before publication. Thus, I'll keep my rating.
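The remark above that $x^*$, being the posterior mean, can be approximated by averaging many posterior samples can be made concrete in a toy 1-D Gaussian denoising problem, where the posterior is known in closed form. This is a hypothetical illustration we add for clarity, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_x, sigma_n = 1.0, 0.5
y = 0.8  # a single noisy observation

# Gaussian denoising: x ~ N(0, sigma_x^2), y = x + n, n ~ N(0, sigma_n^2).
# The posterior p(x|y) is Gaussian with mean w*y and variance v:
w = sigma_x**2 / (sigma_x**2 + sigma_n**2)
v = sigma_x**2 * sigma_n**2 / (sigma_x**2 + sigma_n**2)

# approximate the MMSE estimate x* by averaging many posterior samples
samples = rng.normal(w * y, np.sqrt(v), size=200_000)
x_star_mc = samples.mean()
print(abs(x_star_mc - w * y) < 0.01)  # agrees with the closed-form posterior mean
```

The Monte Carlo average converges to the closed-form posterior mean $w\,y$, which is exactly the property the rebuttal describes (and which, as noted, the paper itself does not rely on).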
Neural Lad: A Neural Latent Dynamics Framework for Times Series Modeling
Accept (poster)
Summary: The authors propose a new neural ODE framework, Neural Lad (Neural Latent dynamics model), that decomposes the differential function into three components (a neural-network differential function, an attention-based network, and a time-dependency function), plus a graph convolution network for spatial correlations. They evaluate the method on short- and long-horizon forecasting tasks for both univariate and multivariate time series data. The proposed method outperforms or performs on par with existing Neural ODE, time-series transformer, and graph NN based models. Strengths: Originality: The work is a novel combination of known techniques and it is clear how it compares to existing work, for example, Neural CDEs. The related work section is well organized and all the related methods have been addressed, providing an adequate overview of the related work for the reader. Clarity: The submission is very clearly written, so it is easy to follow the proposed method. Significance: The authors provide a novel NODE-based method for long-horizon forecasting Weaknesses: Quality: The authors introduce a function $h_w(t)$ which should extract the seasonality and trend. However, neither mathematically nor experimentally is it clear whether this function really achieves this. Traditionally, the trend is extracted as the mean value of the data, while the seasonal part is the remaining time series data once the trend is removed. From the introduced equations, where the output of the differential function is scaled by the time-dependency factor, it is at the moment unclear how this is achieved. Similarly, the authors introduce an attention-based network to model the change of the input signal, but the obtained weight is multiplied with the 'memory matrix' rather than the input signal itself; thus it is unclear what information this memory matrix has learned that reflects the input signal. This matrix is also not visualized. Clarity: For eq. 
(1) and eq. (2) I would recommend the authors update the prediction to $\hat{x}_{t:t+H}$, so that in eq. (2) the L2 loss is between the true observations and the predictions of the network. Line 92: the relationship for the first function $\xi$ seems to be in the wrong direction, given what is defined in eq. (3). For the experiments section, crucial points are missing: how many data points are used as input, dataset details, etc. This information should be provided in the supplementary material. The hyperparameter section is also missing details on the type of solver used, solver parameters, the type of optimizer, etc. Section 2.2: the block residual architecture is unclear from the text; I would recommend moving the figure from the appendix to the main text. Significance: The results show that the proposed method indeed achieves improved performance compared to the existing models. But I would have liked the authors to provide some further analysis and results on the different decomposition components, to strengthen their claims about the purpose of the different functions. Moreover, from a theoretical standpoint, it is unclear under which conditions $f_\theta(z_t)$ still remains the correct underlying differential equation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Eq. 6: The differential equation F() is decomposed into three components: a differential function, an attention network, and a time-dependency network. This function F() is used in the ODE solver, which iteratively calls it for dt increments. For every increment, the model goes through some number of blocks (L), where the output of the differential function is scaled by the time-dependency function. Could you please clarify what the output of $h_w(t)$ is, and why you scale the state-dependent dynamics by it? Do you have some additional results where it can be seen that this multiplication identifies seasonal trends as seen in the data? 
Is the extrapolation (forecasting) autoregressive, or do you have to pre-define the output sequence length? For Eq. 8: from a self-attention perspective we have keys, queries and values. In relation to your work, I would identify the values as $\frac{dX}{dt}$ rather than M. Why do you apply the softmax weights to the memory matrix rather than to the signal path, which you subsequently use to update the vector from the differential equation? What are the number of parameters, complexity, and compute time of the proposed method? As cubic splines are an approximation method, and the time expansion results in additional residual connections, how does it compare to existing NODE models and transformer architectures? Experiments: For all experiments, in the final version, I would like to ask the authors to also report the std across multiple runs of the model. It is unclear how many time points were taken as input, for example, to compute the attention-based network output. For the simulated data, are there more visualizations that showcase that the model has captured the linear trend in the data? Similarly, for the ablations on the simulated data, when the $h_w$ component is removed, does the model then fail to capture the linear trend? Is there a clear pattern showing that the $g_v$ function is beneficial for time series with sudden changes, like weather forecasting? As the authors have changed the RHS of the differential equation, is there a theoretical proof that $f_\theta(z_t)$ still learns the correct underlying dynamics? What are the necessary/sufficient conditions under which this is the case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have briefly touched upon the training efficiency of Neural Lad; however, I would recommend the authors include a more thorough discussion of this, especially as the related work section mentions it as the major drawback of transformer-based models, yet it is not explicitly discussed for the model at hand. This information is also missing in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the recognition and valuable comments on the contribution of our work. We address the concerns in the following. **Weakness Part:** **Quality:** **Q:** The mechanism of the time-dependent component **R**: The effectiveness of the designed time-dependent function $h_w(t)$ can be verified from several angles. **i)** Empirically, our ablation study in Table 1 shows that using $h_w$ to extract seasonality and trend improves the prediction performance. **ii)** From the perspective of the underlying mechanism, seasonality is typically a regular, cyclical, recurring fluctuation. Therefore, we propose to constrain the seasonality component of $h_w(t)$ to belong to the class of periodic functions, and the natural choice of basis is the Fourier series. The trend is, most of the time, a slowly varying function. To mimic this behaviour, we propose to constrain it to be a polynomial of small degree, a function that varies slowly across the forecast window. Our modeling of seasonality and trend is reasonable and differs from the traditional one, which uses the mean of the time series and its residual. **iii)** With this careful design, the learned latent dynamics of $z_t$ can capture more sophisticated details than without it, as demonstrated in Figure 4 in the Appendix. **Q:** The mechanism of the attention-based network **Reply:** Our attention-based modeling of the change of the observed signal is inspired by memory networks [30] and matching networks [31]. The memory matrix $M$ is learned and can be seen as a memory bank, where each row represents a particular pattern describing one aspect of the input signal's local change. The introduction of the memory matrix allows the model to combine different patterns of local change, which also helps to provide more discriminative features, particularly for segments with abrupt changes. In Fig. 
2 of the Appendix, we visualize the memory weights together with the input signals; we can observe that the learned weights are clearly distinguished from the others at points of sudden change. **Clarity:** **Q:** clarity of eq. (1) and (2) **Reply:** We will follow your advice on the notation; the prediction is actually $\hat{x}_{t:t+H} = G_{\Theta}(x_{t-L:t})$. **Q:** the wrong direction in eq. (3). **Reply:** We apologize for the typo here. The last equation of (3) should be $x_{t:t+H} = \xi(z_t)$, denoting the decoding process. We use the learned dynamics $z_t$ to predict the following $H$ steps, as illustrated by Figure 1 in the Appendix. **Q:** Significance of the proposed components **Reply:** From the performance point of view, we present an ablation study in Table 1, from which we can observe that both the time-dependent $h_w(t)$ and the memory-based component for the input signals improve performance. Also, from the visualization we can see that the learned weights are sparser than those of Neural CDE in Fig. 5, indicating that Neural Lad tends to avoid overfitting. From a theoretical standpoint, we think the time-dependent component $h_w(t)$ makes the model learn the weights of a basis expansion instead of fitting directly; therefore the network weights are sparser than in neural CDE. Moreover, CDE only deals with the case where the hidden dynamics are linear in the change of the input signal, which may fail when the control signal is non-linear. Therefore we propose an attention-based network to model the local change of the observation signals. **Questions Part**: **Q:** "...Could you please clarify what is the output of $h_w(t)$, and why would you scale the state-dependent dynamics by it?" **Reply:** The output of $h_w(t)$ is a scalar, indicating the instantaneous strength of periodicity and trend. This scaling is one modeling choice for incorporating the periodicity and trend into the dynamics of the latent $z$. 
**Q:** "Do you have some additional results where it can be seen that this multiplication identifies seasonal trends as seen in the data?" **Reply:** Empirically, according to our ablation study on synthetic datasets with very strong periodicity and trend, the experimental results in Table 1 show that using $h_w$ improves the prediction performance. Additionally, with this scaling, the learned latent state $z$ shows more fine-grained details, as plotted in Fig. 4 of the Appendix. **Q:** Running time of Neural Lad. **Reply:** We add a complexity analysis and a computational-time comparison with other approaches, as shown in Figure 1 of the attached pdf file. We can observe that Neural Lad converges faster than STG-NCDE, so it achieves better performance earlier than the baselines. For the prediction accuracy comparison with existing neural ODE and transformer models, see Tables 1-4 for the different tasks in the main paper. **Q:** "Experiments. For all experiments, in the final version, I would like to ask the authors to also report the std across multiple runs of the model." **Reply:** In the current version, as we fix the random seed to match the baseline models, the experimental results are identical across runs for both the univariate and multivariate time-series forecasting tasks. We will indeed also report the std across multiple runs with different random seeds. **Q:** Visualization of toy data **Reply:** In the right two panels of Figure 3 in the main paper, we show that without consideration of trend and seasonality, the neural CDE model underestimates the rising trend near the peak and thus cannot capture the true waveform accurately. **Q:** Whether Neural Lad can learn the correct dynamics? 
**Reply:** Our model can still learn the correct underlying dynamics, since Neural Lad is a non-trivial generalization of the original Neural CDE, which has already been proved to be sufficiently expressive. The details of the datasets and hyperparameter settings are described in the Appendix. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and clarifications on my questions. I will raise my score. In the final version I would still like to see the model performance across different initialization seeds. And I would still like to see the following ablation: a visual result of the time series generated by Neural Lad for Table 1 with (a) only $h_w$ and (b) only $g_v$. One would assume that in setting (b) the model will not capture the periodical/trend property, while in (a) the latent dynamics should be less fine-grained as the observed signal does not affect the latent dynamics. (Correct me if these assumptions are wrong). --- Reply to Comment 1.1.1: Title: More experiments and visualizations Comment: We appreciate your recognition of our clarifications and your further suggestions, which will definitely make for a better version of our work. As suggested, we will add more statistical results on the model performance, as well as visualizations showing that our model learns more detail than a model that does not account for the periodicity/trend property.
Summary: This paper addresses the problem of characterizing the local change of observed signals and ignoring inherent periodical property in time series forecasting tasks. A new neural ODE-based framework is proposed with 1) a decomposable latent space for time-dependent dynamics and 2) an attention-based design for local changes in observation. The framework is further extended to multivariate settings using graph-based networks to adaptively learn the spatial correlation. Experiments have been presented on both univariate and multivariate settings to demonstrate the improved forecasting performance of the proposed model. Strengths: 1. The presentation of the dynamic model is clear and well-structured. The attention-based networks handle the local change of the signal and the time-dependency function characterizes the seasonal and trend properties of the signal. 2. Empirical results show improved forecasting performance of the proposed model on both synthetic and real-world datasets. Weaknesses: 1. I feel that the approach is somewhat incremental from the perspective of the methodology in that it is an extension of the previous works (e.g. Kidger et al 2020, Choi et al 2022), in combination with a decomposable form of latent dynamics and attention-based feature extractor. Could the author elaborate more on how the proposed method differs from the previous works? 2. Experimental settings could be more elaborated: 1) The motivation of choosing baseline models in Table 2, Table 3, and Table 4, and how the proposed model improved could be more detailed; 2) The author should also provide more information about datasets used in the two univariate and multivariate settings in terms of their local property or seasonal features. I also think the ablation study is not sufficient: 1) There should be evidence to prove the benefits of using attention-based networks for local changes; 2) The detail of improvements in spatial relationships should also be provided. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please check the weaknesses mentioned above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the recognition and valuable comments on the contribution of our work. We address the concerns in the following. **Weakness Part:** **Q:** "I feel that the approach is somewhat incremental from the perspective of the methodology in that it is an extension of the previous works (e.g. Kidger et al 2020, Choi et al 2022), in combination with a decomposable form of latent dynamics and attention-based feature extractor. Could the author elaborate more on how the proposed method differs from the previous works?" **Reply:** As noted in the reply to reviewer LTR9, Neural Lad is a new member of the neural ODE family. To our knowledge of the literature, the idea of a decomposable design for latent dynamics in time-series modeling is novel and effective. We emphasize that our choice of the latent dynamics F(·) is drastically different from other neural ODE family members, in which only $f_θ(z_t)$ was considered, as shown in Figure 1. The entire model can be thought of as a continuous analogue of recurrent neural networks with layerwise adaptivity. The decomposability assumption allows us to maintain a simple yet effective design of the latent dynamics without loss of expressivity, and its effectiveness has been verified empirically. Compared to Neural CDE (Kidger et al. 2020) and STG-NCDE (Choi et al. 2022), one contribution is to model the explicit dependence on time $t$ with the decomposable time dynamics. In Figure 5 of the Appendix, we can observe that the learned linear weights and convolution weights are sparser than Neural CDE's once the seasonal and trend time dynamics are taken into account, which indicates that the proposed component captures the hidden dynamics better, so it is not necessary to fit the future with more parameters.
The other contribution is to use a memory network on the control gradient to capture the local change of the observed signals, which improves performance by a large margin (shown in Table 1) and models sudden changes of the input signals (visualized in Figure 2). **Q:** "Experimental settings could be more elaborated...." **Reply:** 1) Our principle for selecting baselines for comparison is to identify recent works with competitive and even state-of-the-art prediction performance. Concretely, in Table 2, the PhysioNet sepsis classification task, we choose the same baselines as Neural CDE. In Table 3, for univariate time-series forecasting, we choose three kinds of models: widely used transformer models (Autoformer, FEDformer), lightweight linear networks (LightTS, DLinear), and a neural CDE network (STG-NCDE). In Table 4, for the multivariate time-series datasets, we choose graph time-series models such as STGCN, AGCRN, DSTAGNN, and STG-NCDE as baselines. 2) Regarding the details of the datasets used for testing, we will follow your advice and add more description to make them more self-contained in the revision. Thank you for your valuable suggestions. 3) Regarding the ablation study: from the performance perspective, we show the benefits of the attention-based network in Table 1. Specifically, $g_v$ in Table 1 is the memory-network component; we observe that the MAE drops from $2.31$ to $1.44$ at horizon $12$ and from $3.37$ to $2.07$ at horizon $96$ by only adding $g_v$ to STG-NCDE, which demonstrates the benefit of the memory network. From the explanation perspective, we visualize the time series and the attention weights of the memory network in Figure 2 in the Appendix; we can observe that the learned weights are extremely sparse and clearly distinguished from the others when the time series has sudden changes.
The benefit of considering spatial relationships has already been verified by STG-NCDE, which improves the performance of Neural CDE by a large margin on the traffic datasets; for example, STG-NCDE improves Neural CDE from $20.44$ to $15.57$ on PEMS03, and from $26.31$ to $19.21$ on PEMS04. Neural Lad can be seen as a non-trivial extension of STG-NCDE, and thus also enjoys the advantage of considering spatial relationships, as shown in Table 2. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification and additional details about the experimental setting.
Summary: This paper presents a new framework for modeling time series using a controlled latent neural-ODE-based dynamics model. The proposed latent dynamics function uses a special factorized structure, which effectively disentangles the influences of time (via a periodic basis expansion to capture periodic patterns), the current latent state, and the input signal's history (leveraging an attention-based architecture). Both uni- and multivariate versions of the model are presented. Numerical validations are conducted across a variety of tasks and benchmarked against a number of neural ODE and transformer variants. Strengths: * The paper is generally well written and easy to follow. * The numerical validation is quite extensive, spanning over tasks of different natures and (short- and long-term) prediction regimes. The proposed method is compared against a wide range of variants in the neural ODE and transformer model families. Weaknesses: * The idea itself is not the most novel, involving a simple factorization and model architectures drawing cues from popular ones already existing in the literature. However, this is largely compensated by the exhaustive numerical validations where consistent improvements are observed. * Discussion on the computation costs and potential limitations is missing. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Can you comment on the cost aspects (i.e. training, inference and memory footprints) of the Neural Lad models? I am curious how it compares to the baselines tested in the paper. * The notation across Equations (13-14) is slightly unclear to me. How does $B(z_t, t)$ in equation (14) relate to the $B$ operations in Equation (13)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Not much discussion on the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the recognition and valuable comments on the contribution of our work. We address the concerns in the following. **Weakness Part:** **Q:** "The idea itself is not the most novel, involving a simple factorization and model architectures drawing cues from popular ones already existing in the literature. However, this is largely compensated by the exhaustive numerical validations where consistent improvements are observed." **Reply:** We admit that our model is not a completely new model, since it is a new member of the neural ODE family. However, **the idea of a decomposable design for latent dynamics in time-series modeling is novel and effective** to our knowledge of the literature. We emphasize that our choice of the latent dynamics F(·) is drastically different from other neural ODE family members, in which only $f_θ(z_t)$ was considered, as shown in Figure 1. The entire model can be thought of as a continuous analogue of recurrent neural networks with layerwise adaptivity. The decomposability assumption allows us to maintain a simple yet effective design of the latent dynamics without loss of expressivity, and its effectiveness has been verified empirically. Compared to Neural CDE (Kidger et al. 2020) and STG-NCDE (Choi et al. 2022), one contribution is to model the explicit dependence on time $t$ with the decomposable time dynamics. In Figure 5 of the Appendix, we can observe that the learned linear weights and convolution weights are sparser than Neural CDE's once the seasonal and trend time dynamics are taken into account, which indicates that the proposed component captures the hidden dynamics better, so it is not necessary to fit the future with more parameters.
The other contribution is to use a memory network on the control gradient to capture the local change of the observed signals, which improves performance by a large margin (shown in Table 1) and models sudden changes of the input signals (visualized in Figure 2). **Q:** "Discussion on the computation costs and potential limitations is missing." **Reply:** We add a complexity analysis and a computational-time comparison with other approaches in Figure 1 of the attached PDF file. We run all experiments on a Tesla A100-80G GPU. The training time of Neural Lad on the toy dataset is about 8 s per epoch, and forecasting takes 0.4 s per validation iteration (0.4/265 = 0.0015 s). For large real-world traffic datasets (such as PEMS03 and PEMS04), the training time is 2-3 minutes per epoch. We visualize the training process on the toy dataset in Figure 1: Neural Lad converges faster than STG-NCDE, so it reaches better performance earlier than the baselines. **Questions Part:** **Q:** Can you comment on the cost aspects (i.e. training, inference and memory footprints) of the Neural Lad models? I am curious how it compares to the baselines tested in the paper. **Reply:** Compared to its main baseline, Neural CDE, our model does not add much overhead; remarkably, Neural Lad converges faster than STG-NCDE, so it reaches better performance earlier than the baseline, as shown in Figure 1 of the attached PDF file. **Q:** "The notation across Equations (13-14) is slightly unclear to me. How does $B(z_t, t)$ in equation (14) relate to the $B$ operations in Equation (13)?" **Reply:** The basis-expansion component $B(z_t, t)$ in eq. (14) is the multi-layer stacked residual network shown in eq. (13), where $L$ is the number of layers of the basis network. The structure of $B(z_t, t)$ is also shown in Fig. 1(a) in the Appendix.
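A stacked residual basis expansion of this kind can be sketched roughly as follows. This is only an illustration of the idea: the basis functions, the two-layer depth, and the omission of the $z_t$ dependence are all simplifying assumptions, not the architecture of eq. (13):

```python
import math

def basis_network(t, layer_weights):
    """Residual stack: h_{l+1} = h_l + w_l . phi(t), with a toy basis
    phi(t) = (sin(2*pi*t), cos(2*pi*t), t) mixing seasonal and trend terms."""
    h = 0.0
    for w in layer_weights:
        phi = (math.sin(2 * math.pi * t), math.cos(2 * math.pi * t), t)
        h = h + sum(wi * pi for wi, pi in zip(w, phi))
    return h

# L = 2 layers: one purely seasonal layer plus one purely trend layer
weights = [(1.0, 0.0, 0.0), (0.0, 0.0, 0.5)]
value = basis_network(0.25, weights)  # sin(pi/2) + 0.5 * 0.25 = 1.125
```

Each layer only adds a weighted combination of basis features to the running output, so the stack stays cheap while still expressing both seasonality and trend.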
--- Rebuttal Comment 1.1: Comment: Thank you for your clarifications and providing additional information on the computation costs.
Summary: This paper proposes a novel neural ordinary differential equation framework for time-series modeling. The main contribution is the design of the latent dynamics function $F(\cdot)$, which is decomposed into hidden-state dynamics $f_\theta(z_t)$, a time-dependent term with periodic and trend properties $h_w(t)$, and an attention-based network $g_v(x_{0:t})$ modeling the effect of the signal on the latent dynamics. They also show that the model can be extended to multivariate time-series forecasting. The proposed design is theoretically sound, and the authors show it to be effective on synthetic and real data. Strengths: - The paper is well-written and the contribution is well stated. - The proposed design is compared thoroughly with different types of baselines on various synthetic/real data and outperforms the baselines in most cases. Weaknesses: - Analysis of the computational complexity/cost is missing Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The synthetic data seems to have only trend dynamics. It would be nice to see results with seasonal (periodic) dynamics Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the recognition of the novelty and contribution of our work. We address the concerns in the following. **Weaknesses Part**: **Q**: Analysis on the computational complexity/cost is missing **Reply**: We add a complexity analysis and a computational-time comparison with other approaches, as shown in Figure 1 of the attached PDF file. We run all experiments on a Tesla A100-80G GPU. The training time of Neural Lad on the toy dataset is about 8 s per epoch, and forecasting takes 0.4 s per validation iteration (0.4/265 = 0.0015 s). For large real-world traffic datasets (such as PEMS03 and PEMS04), the training time is 2-3 minutes per epoch. We also visualize the training process on the toy dataset in Figure 1. We can observe that Neural Lad converges faster than STG-NCDE, so it reaches better performance much earlier than the baseline. **Questions Part**: **Q**: Synthetic data seems to have only trend dynamics. It would be nice to see the result from seasonal (periodic) dynamics **Reply**: The generative formula of the synthetic data is $x_{i,t} = a_{i,t}\sin(2\pi b_{i,t} t + \phi) + n_{i,t}$, **including both periodicity and trend dynamics**, where the changing frequency $b_{i,t}$ and amplitude $a_{i,t}$ represent the seasonality and the trend, respectively. --- Rebuttal Comment 1.1: Comment: Thank you for providing detailed information on the computation costs.
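A minimal generator in the spirit of that formula might look as follows; the particular schedules chosen below for the amplitude $a_{i,t}$ and frequency $b_{i,t}$, and the noise level, are illustrative guesses rather than the paper's exact configuration:

```python
import math, random

def synthetic_series(T=200, phi=0.0, seed=0):
    """Toy generator following x_t = a_t * sin(2*pi*b_t*t + phi) + n_t,
    where a_t (amplitude) encodes trend and b_t (frequency) encodes
    seasonality; both schedules here are illustrative assumptions."""
    rng = random.Random(seed)
    xs = []
    for k in range(T):
        t = k / T
        a_t = 1.0 + 2.0 * t                     # linearly growing amplitude (trend)
        b_t = 2.0 + math.sin(2 * math.pi * t)   # slowly varying frequency (seasonality)
        n_t = rng.gauss(0.0, 0.05)              # observation noise
        xs.append(a_t * math.sin(2 * math.pi * b_t * t + phi) + n_t)
    return xs

series = synthetic_series()
```

Varying $b_t$ changes how fast the oscillation cycles (seasonal structure), while the growing $a_t$ produces the long-term trend in the envelope.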
Rebuttal 1: Rebuttal: We thank all the reviewers for their recognition and valuable comments on our work. We have carefully responded to the concerns raised by each reviewer, including clarifying the novelty and adding more experimental results. We attach the additional experimental results as a PDF file for further inspection. Pdf: /pdf/d4fb106e27e6d36e859722bbdf7a79ba9ef63a0f.pdf
NeurIPS_2023_submissions_huggingface
2023
Riemannian Laplace approximations for Bayesian neural networks
Accept (poster)
Summary: This paper develops a Riemannian Laplace approximation, which is a Laplace approximation that takes into account the Riemannian geometry of the loss surface. The contributions of this paper are as follows: i) showing that such a loss-aware Laplace approximation is better able to capture the true posterior (and predictive); ii) presenting the Riemannian geometry framework for the Laplace approximation, with the Hessian in both the normal and tangential space; iii) a practical algorithm for efficiently integrating the required ODE; iv) experimental evidence on several commonly used datasets. Strengths: The paper is for the most part very well written and easy to follow (main idea and background), and the authors give several examples (including figures) to help the readers further. The experimental section is decent, having both a toy example that helps gain intuition and a quantitative evaluation on several standard datasets, where the method is benchmarked against vanilla Laplace approximation and the MAP estimate. I think for the purpose of this paper, the various alternative approximate inference algorithms would not be necessary to compare to, as this is a direct extension of Laplace. Weaknesses: Even though the paper is generally very well written, I did, however, struggle with some parts of the main section. As a reviewer who is very familiar with Bayesian neural networks and the Laplace approximation but less so with Riemannian geometry, I would prefer to have a more in-depth background section on Riemannian geometry. I think the Laplace approximation section can be shortened if space is needed (e.g. tricks of the trade and strengths and weaknesses could be shortened). The main weakness is the scalability of the method (see limitations). Solving the ODE takes a very long time. According to Fig. 4, this is on the order of tens of seconds per mini-batch, even for very small neural networks.
The models used in the experimental section are tiny, e.g. single hidden layer networks with only 50 hidden units. For the CNN, the authors mention that they use 2 conv layers, but I didn't find the kernel size. It is also not clear to me if and which approximation the authors use for the Hessian. Can the complexity be reduced e.g. with a diagonal Hessian approximation? How would this compare to a diagonal or Kronecker-Factorised approximation of the standard LA? Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Figure 4, why does the NLL have a minimum and not strictly decrease with larger mini-batch size? Shouldn't the estimate become more and more precise? Is the wall-clock time really on the order of tens of seconds? What is the model size? For the CNN, what is the kernel size? Which approximation to the Hessian do you use for the Riemannian Laplace approximation (e.g. Diagonal or Kronecker-Factorized)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method has been applied to very small neural networks and the authors argue that this is because the method is computationally very expensive. While the authors acknowledge this in the limitations, it potentially limits the applicability to realistic neural network model sizes. It would be important to know more precisely what the computational complexity is (also as a function of the model size). Furthermore, I would like to see a comparison in wall clock time for the entire approximation compared to standard Laplace approximation, not only for different mini-batch sizes, but also for different model sizes, including large model sizes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer dyBq We would like to thank the reviewer for their positive consideration. We appreciate the time and effort spent in reviewing our paper. We address all the remaining concerns below. **Point 1 / Weaknesses** > *“Even though the paper is generally very well written, I did, however, struggle with some parts of the main section. … I think the Laplace approximation section can be shortened if space is needed (e.g. tricks of the trade and strengths and weaknesses could be shortened).”* Thank you for the feedback. The paper draws on two largely separate fields, Bayesian neural networks and differential geometry. Due to space limitations, we included in the main paper the information most important for understanding the idea, while thorough discussion and further information have been moved to the appendix. We will take this feedback into account and update the camera-ready version accordingly. **Point 2 / Weaknesses and Questions** > *"It is also not clear to me if and which approximation the authors use for the Hessian. … How would this compare to a diagonal or Kronecker-Factorised approximation of the standard LA?"* In all the experiments, we consider the full Hessian for computing the covariance of the Gaussian approximation, both for classic and linearized LA and for our method. This is computed using the Laplace library, which computes the GGN approximation. Therefore, for our method we sample the velocities from a Gaussian approximation obtained by computing the full Hessian. In the ODE solver, since we rely on the "hvp" function of functorch, which also uses the full Hessian in a computationally efficient way, we avoid materializing it. While for complexity we refer to the general comment, the influence of the choice of Hessian approximation should be further investigated; empirically, we have seen that diagonal covariances imply faster geodesics (see the benchmark in the attached PDF).
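The point about avoiding materialization of the Hessian can be illustrated with a toy sketch on a quadratic loss. Here a gradient finite difference stands in for the exact autodiff Hessian-vector product (functorch/`torch.func` compute it exactly via automatic differentiation rather than by differences); the quadratic loss and its matrix `A` are illustrative:

```python
def grad(theta, A):
    # gradient of L(theta) = 0.5 * theta^T A theta is A @ theta (A symmetric)
    n = len(theta)
    return [sum(A[i][j] * theta[j] for j in range(n)) for i in range(n)]

def hvp(theta, v, A, eps=1e-4):
    """Hessian-vector product via a gradient finite difference:
    H v ~= (grad(theta + eps*v) - grad(theta)) / eps.
    Only gradients are evaluated; the n x n Hessian is never formed."""
    g0 = grad(theta, A)
    g1 = grad([t + eps * vi for t, vi in zip(theta, v)], A)
    return [(a - b) / eps for a, b in zip(g1, g0)]

A = [[2.0, 0.5], [0.5, 1.0]]   # the Hessian of the toy quadratic loss
out = hvp([0.3, -0.7], [1.0, 2.0], A)   # equals A @ v = [3.0, 2.5]
```

For a neural-network loss, the same idea lets an ODE solver query $Hv$ at each step with a cost comparable to a couple of gradient evaluations, which is why the full (GGN) Hessian can be used without ever storing it.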
We will also add a comparison in the camera-ready appendix, where we will analyse how sampling the initial velocities from less accurate Hessian approximations affects all models. **Point 3 / Questions** > *In Figure 4, why does the NLL have a minimum and not strictly decrease with larger mini-batch size? Shouldn't the estimate become more and more precise? Is the wall-clock time really on the order of tens of seconds? What is the model size?* Thank you for pointing out this issue; we agree that it can be confusing. In this plot we report the test NLL, i.e., the negative log-likelihood on unseen test data. Indeed, the train NLL should follow the behavior you described: the more points in the batch, the closer the sampled functions are to the MAP estimate. However, the MAP may not be the optimal model for the test distribution. Instead, using a (small) batch when solving the ODE system allows our method to generate functions similar to the MAP that nevertheless exhibit some variability and potentially generalize better. Indeed, if the batch is still representative of the whole dataset, the sample will differ from the MAP mostly close to the decision boundary and away from the data. Regarding the implementation, the result corresponds to the Banana experiment in Sec. 5.2. We will update the plot and the description accordingly. **Point 4 / Questions** > *“For the CNN, what is the kernel size?”* We used a $5\times5$ kernel, as reported in Appendix D.5 --- Rebuttal Comment 1.1: Title: Review score update Comment: Thank you for your response, addressing my main concerns and questions. I decided to increase my score from 6 to 7.
Summary: The paper presents a Laplace approximation for Bayesian neural networks that adapts the covariance to the local geometry of the loss, effectively overcoming the quadratic approximation of the loss. The authors report competitive performance with the standard Laplace approximation (both Monte Carlo sampled and linearized) on regression and MNIST-scale classification problems as well as a reduced reliance on tuning the precision of the prior. The approach is explained clearly and makes a lot of sense (at least with my rather superficial understanding of Riemannian geometry); it is applicable in more general probabilistic models and I would expect it to lead to various follow-up works. While methodologically this is a very nice paper, I feel like it is let down by the empirical evaluation. The method is only tested on UCI and MNIST-scale datasets, which are hardly relevant for deep learning these days. The authors mention the computational cost of their method, but only discuss the reasons superficially without providing exact benchmark figures to give a sense of where the main bottlenecks arise. Given the apparent computational limitations of the method, an experiment with a non-NN model could have strengthened the paper. All things considered, the strong methodological contribution outweighs the unconvincing empirical evaluation for me, so I would lean towards acceptance, although I wish I could have given the paper a much higher rating. Strengths: * The core idea makes a lot of sense, is applicable beyond inference in neural networks and seems to work well for the experiments that are considered. It has the potential to address the rather restrictive approximation of a quadratic loss in the Laplace approximation. * I am confident that the paper will inspire various pieces of follow-up work. * The paper is well-structured and -written. * Effective use of illustrative examples throughout.
Weaknesses: * Only small scale problems are considered in the experiments * This is exacerbated by lack of analysis of the computational cost. It is not really clear to me what specifically is preventing the method from being applied to larger networks and datasets (even something like CIFAR with ResNets would have been great). The discussion mentions scaling issues w.r.t. number of datapoints and parameters, but the relative behavior is not benchmarked at all and it is also not clear to me how much time the ODE solver actually spends e.g. calculating the Hessian-vector products vs computations independent of that. Given that this paper opens up a new direction for research, I think there should be much clearer pointers as to where exactly the current bottlenecks and limitations are and where improvements can realistically be achieved. * There is no experiment demonstrating the efficacy of the approach on a non-NN probabilistic model. Given the apparent computational cost of the method, this would have seemed like a rather natural experiment to include and neural networks don’t seem like a particularly good fit for the approach. * The explanation of why the method would work better with a mini-batch rather than the full dataset (section 3.3/fig 4) is not exactly clear and seems rather hand-wavy. * Lack of HMC ground-truth baselines for the regression problems Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * What are the absolute runtimes of each method per sampled prediction in the experiments? It would be great to get a better sense of this as a function of dataset/minibatch size and number of parameters/network size (I could see a synthetic experiment be illuminating here). * How much of the cost of the method lies in calculating the Hessian-vector products? These might take up a significant chunk of the total ODE compute time, so I’m wondering if explicit Hessian approximations (last-layer, KFAC, subset, ...) might improve scalability?
* Could you elaborate on the discussion at the end of section 3.3? In particular I don’t follow how using the full dataset would over-regularize the geodesic as judging from the $N/B$ factor in the inline equation you seem to correctly rescale the mini-batch loss to match the full loss in expectation (note: I assume by ‘over-regularize’ you mean concentrating the samples around the mean, i.e. effectively reduce the entropy of the Gaussian approximate posterior). Any ideas for overcoming this over-regularization? * Could you comment on the results for the Riemannian Laplace approximation being noticeably better on the UCI datasets with a non-optimized precision? **Minor note**: the x axis labels in Fig 4 are cut off at the bottom Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Only discussed superficially and in a quite hand-wavy manner, I would have wanted to see concrete benchmarks and an analysis of how the compute time evolves with increasing dataset size and number of parameters respectively as well as some evidence from the literature that a tailor-made ODE solver could indeed allow the method to take the step from small conv nets to more modern architectures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer BssD We thank the reviewer for the positive consideration of our work and the constructive feedback. We appreciate the time spent reviewing our paper. We address all your concerns below, and we refer to the general comment for the questions regarding scalability. **Point 1/ Weaknesses** > *“There is no experiment demonstrating the efficacy of the approach on a non-NN probabilistic model. … good fit for the approach.”* Indeed, our model can also be used to approximate posteriors other than BNNs. The paper already contains two constructive examples: the Rosenbrock function (Fig. 1) and logistic regression (Fig. 3). We are happy to include more examples in the camera-ready appendix. We decided to focus on the BNN problem because of the challenges, the potential impact, and the research questions that arise. Even with the current basic setting, our approach performs on par with or better than linearized LA, which is considered among the strongest approximations for turning NNs into BNNs. We also note that our method is interpretable compared to linearized LA, whose performance is not yet theoretically understood. **Point 2/ Weaknesses** > *"The explanation of why the method would work better with a mini-batch rather than the full dataset seems rather hand-wavy"* We agree that we have not yet fully analyzed the influence of batching on the result. The terminology can lead to misunderstanding, but by “batching” in this setting we mean solving each ODE using only a subset of the data. In the context of this paper we propose this as an “obvious and simple” way to scale the method. However, further research should be conducted to properly analyze the behavior. We believe that this is closely related to the concept of generalization in deep learning when stochasticity is induced in the training algorithm.
**Point 3/ Weaknesses** > *"Lack of HMC ground-truth baselines for the regression problems"* Thank you for the suggestion. We added the predictive distribution obtained using HMC to the attached PDF. We will add it to the appendix too. **Point 4 / Questions** > *"What are the absolute runtimes of each method per sampled prediction in the experiments? ...(I could see a synthetic experiment be illuminating here)"* Thank you for the suggestion. We provide some initial results in the attached PDF, and we will include further analysis in the camera-ready version. We also briefly mention the challenges that influence this benchmark in the general comment. **Point 5 / Questions** >*“How much of the cost of the method lies in calculating the Hessian-vector products? … might improve scalability?* This is a great suggestion for future research. Indeed, approximations to the metric and/or the ODE are of particular interest for reducing complexity, as we also mentioned in the general comment. **Point 6 / Questions** > *"Could you elaborate on the discussion at the end of section 3.3? ... Any ideas for overcoming this over-regularization?"* Indeed, the over-regularization means that, due to the linearization and the prior precision optimization, especially if the precision gets a high value, the low-loss region concentrates closely around the MAP. Therefore, our samples are generated only near the MAP, and this bias limits the variability of the sampled functions. This, for example, does not happen in our standard approach. We believe that batching is an interesting way to alleviate this issue, as it empirically seems to be beneficial. However, it poses challenging questions and may yield future insights. For example, there might be a correlation between the quality of the MAP with respect to generalization and the sampled models associated with the loss surfaces implied by each batch.
**Point 7 / Questions** > *“Could you comment on the results for the Riemannian Laplace approximation being noticeably better on the UCI datasets with a non-optimized precision?”* If the optimized prior precision is high, this corresponds to a stronger L2 regularization, which implies that more models will be generated near the MAP, and this is true even for our model. When the prior is not optimized, our model is capable of generating models with higher variability, which helps to calibrate the uncertainty better. Therefore, the test NLL is expected to be better in our case, as points near the boundary that are misclassified do not get high confidence. This does happen when the prior is optimized, since the sampled functions are closer to the MAP. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your comments and for providing runtime results and HMC references. I would still really love to see a quantitative non-NN comparison to the regular Laplace approximation. I unfortunately do not have a specific one in mind to suggest, but I would have a look through the tutorials/examples of a couple of probabilistic programming frameworks that focus on MCMC for inference. I'm sure they will have comparisons where sampling works much better than VI with a Gaussian/Laplace, and it would be interesting to see to what extent using your Riemannian approach to adapt to the posterior closes the gap (assuming it does). I would also be curious how the method compares e.g. to normalizing flows in such a lower-dimensional case. I understand the temptation of wanting to do neural nets first and foremost, but I think there is a really clear path for potential applications with more traditional probabilistic models, whereas BNNs will require more work on scalability. Both are interesting in terms of research of course, but for the impact of the paper it would, at least in my opinion, make a lot of sense to cover the former empirically.
Overall, and in light of the other reviews and the consensus for acceptance, I remain with my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer Comment: Thanks again for your comments and suggestions. We agree with you that a quantitative non-NN comparison to the regular Laplace approximation would be interesting. We also agree that exploring how our approach performs on more traditional probabilistic models instead of BNNs would be interesting to cover. While we are currently looking for examples in the literature to test the latter, we have conducted preliminary experiments on the 2D Rosenbrock density. Following [1], where they define how to draw samples from that density, we measured the Wasserstein distance of HMC, LA, and our approach from the true samples. Results are in the table below and are computed using 5000 samples. | Method | Wasserstein distance | | -------- | ------- | | HMC | 7.189 | | Our | 8.398 | | LA | 31.194 | [1] Pagani, F., Wiegand, M., & Nadarajah, S. (2019). An n-dimensional Rosenbrock distribution for MCMC testing. arXiv preprint arXiv:1903.09556.
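The direct sampling that produces the "true samples" above can be sketched as follows. This is a hedged illustration of the factorized construction in Pagani et al. (2019), where the 2D Rosenbrock density $\pi(x)\propto\exp(-a(x_1-\mu)^2 - b(x_2-x_1^2)^2)$ factorizes into a Gaussian marginal and a Gaussian conditional; the constants $a$, $b$, $\mu$ are illustrative choices, not the values used in the rebuttal:

```python
import numpy as np

# Hedged sketch: direct sampling from the 2D Rosenbrock density
#   pi(x) ~ exp(-a*(x1 - mu)**2 - b*(x2 - x1**2)**2),
# which factorizes as x1 ~ N(mu, 1/(2a)) and x2 | x1 ~ N(x1**2, 1/(2b)),
# following the construction in Pagani et al. (2019). The constants below
# are illustrative, not the paper's settings.

def rosenbrock_samples(n, a=1.0, b=100.0, mu=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mu, np.sqrt(1.0 / (2.0 * a)), size=n)
    x2 = rng.normal(x1**2, np.sqrt(1.0 / (2.0 * b)), size=n)
    return np.stack([x1, x2], axis=1)

samples = rosenbrock_samples(5000)
```

Given exact samples like these, the Wasserstein distance of each approximate posterior's samples from the truth can then be estimated as in the table above.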
Summary: This paper presents a novel Laplace Approximation for Bayesian Neural Networks. A key insight is to examine the local loss landscape with a Riemannian metric, which is determined by the gradient of the log posterior. Using this metric and an exponential map, a Laplace Approximation technique is developed to draw posterior samples that fall into regimes with low negative log posterior. The paper also develops a sampling method, which relies on a second-order ODE solver. Several experiments are conducted. When compared to the standard Laplace Approximation, evidence is provided to illustrate the improvements. Strengths: - the contribution provided by this work is original and novel to the best of my knowledge. - the paper is generally well polished. Although the material builds on differential geometry, intuitions are provided relatively well. - Laplace Approximation has been increasingly popular in recent years. Such extensions to incorporate Riemannian geometry could be relevant to the Bayesian Deep Learning community. Weaknesses: One complaint about the paper is that, without referring to the appendix, it is difficult to comprehend the material fully. For example, in section 3.3, the connection between an ODE and the Riemannian metric is difficult to understand directly. Differential geometry is not often taught in engineering courses at many universities. It may make sense to recap the essential concepts in the main paper. The discussion of how the method could be used for the linearized Laplace Approximation is very brief. Another point for improvement is the choice of the baselines. It would make sense to include a deep ensemble and MC-dropout as a minimum. This could show how far the proposed Laplace Approximation can compete with popular methods in practice. In the experiments, the paper could improve on analyzing the computational complexity, in comparison to the standard Laplace Approximation.
Daxberger et al. (2021) claim that the major benefit of the Laplace Approximation is its simplicity; it would be great to position this paper's method more clearly on the trade-off between quality of uncertainty and computational complexity. While it is great for research, I think methods based on differential geometry may have certain drawbacks among practitioners. The paper could be more convincing by weighing the gains in uncertainty against the additional overhead due to the added complexity of the pipeline. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is a limitation section at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer am4X We would like to thank the reviewer for their thoughtful consideration of our work. We appreciate the time you took to review our paper. We have taken the time to address all the points raised under Weaknesses. **Point 1/ Weaknesses** > *"...without referring to the appendix, it is difficult to comprehend the material fully."* > *"...How the method could be used for linearized Laplace Approximation is made very short."* Thank you for the feedback. The paper builds on two distinct fields: Bayesian NNs and differential geometry. Due to space limitations, the thorough discussion and further information have been moved to the appendix, as well as a concrete example for the linearized version of our method. As our standard approach usually works better than the linearized version and is interpretable, we mainly focus on that in the paper. However, we will take the feedback into account and update the camera-ready version accordingly. **Point 2/ Weaknesses** > *"Another point for improvement is the choice of the baselines. … compete with popular methods in practice"* In related works where extensions of LA are proposed, linearized LA is considered the main baseline for comparison, as this is the LA approach that competes well with Deep Ensembles for Bayesian NNs. Therefore, it is already a quite strong baseline. We are happy to include more comparisons (ensemble methods and dropout) in the camera-ready version. **Point 3/ Weaknesses** > *"In the experiments, the paper could improve on analyzing the computational complexity, in comparison to the standard Laplace Approximation. ... due to the added complexity of the pipeline."* LA is already simple and cheap. Our method is more flexible and expressive, but it comes at the price of increased computational cost, as do all differential-geometric techniques.
However, implementing our method is simple, as in practice only an ODE (initial value problem) has to be solved when generating a sample (for scalability and complexity see the general comment). Another benefit of our method is interpretability. Even if linearized LA works well, it is not yet understood theoretically why it performs as it does. Instead, our method is interpretable, which can be easily seen from the regression example where the LA is known to behave poorly. We mentioned in the general comment that the computational cost of our proposed method on top of Laplace is $O(SNW)$, where $S$ is the number of steps of the ODE solver, $N$ is the number of datapoints and $W$ is the number of parameters in the model. Getting $K$ samples from the posterior is therefore $O(KSNW)$. To briefly explain the complexity cost: at each step of the solver, we need to compute the gradient and the Hessian-vector product, which are $O(NW)$, and perform a Runge-Kutta step, which scales linearly with the dimension of the problem, i.e. $O(W)$ [1]. Since the solver takes $S$ steps (RK-45 uses an adaptive step size, therefore this value changes for every ODE), we get $O(SNW)$. 1. Hairer, E., Nørsett, S. P., and Wanner, G. Solving Ordinary Differential Equations I, Nonstiff Problems. Springer, 1993. --- Rebuttal Comment 1.1: Title: Response to the Authors Comment: I would like to thank the authors for the efforts. I have read the other reviews as well as the related responses. I stand by the current score -- I think clearly analyzing the computational complexity vs empirical gains in performance (my third point) is one missing point in the paper. This is also connected to some of the concerns raised by other reviewers. While the authors discuss the computational complexity, it might help to include empirical results, especially for all the comparison results with the standard Laplace approximation. If the paper gets accepted, I also hope to see more baselines like MC dropout and deep ensembles.
--- Reply to Comment 1.1.1: Title: Clarification on suggested empirical study Comment: Thank you again for the useful feedback and comments, and for engaging in the discussion. We are really keen to add this additional analysis and provide results before the end of the discussion period. However, we first need some clarification on what kind of experiment the reviewer would like us to perform in order to measure the computational complexity vs empirical gains in performance. If we take the test NLL, for example, that already measures the quality of the produced uncertainty. Would you like us to also measure the time it takes to get the posterior samples using our method against Laplace? In addition to the complexity analysis, we evaluate empirically the runtime of generating a sample for different model sizes and different dataset sizes (see Fig. 1.b in the PDF attached to the general comment). We would appreciate your clarification so that we can proceed with the additional analysis as soon as possible. Thank you again for your time and consideration.
Summary: The Laplace approximation offers a practical posterior but is limited due to the symmetry of the weight space it is parameterised in. The method proposes to improve posterior quality by adapting the posterior shape through a Riemannian metric that is determined by the log-posterior gradient. Strengths: * Practically: The community is interested in using Laplace approximations for Bayesian posterior estimation due to their simplicity and wide applicability. The proposed paper deals with improving the quality of such approximations, which would benefit many. * Methodologically: Since the quality of the Laplace approximation is inherent to the space it is parameterised in, it is very interesting to see how the Riemannian geometry of the loss landscape provides a better understanding of the approximations and provides avenues to improve posterior quality. * Results: It is very promising to see that the Riemannian Laplace approximation allows for good posterior fits even when no linearization or prior tuning is being used. Weaknesses: Overall, I think the promise of the paper of using the Riemannian geometry of the loss surface to improve the posterior quality of the Laplace approximation is very promising. My concerns mainly lie in the practicality of the approach in terms of scaling the method to larger models (deeper networks and models with more parameters) and larger datasets. Since the paper presents the method in the context of Bayesian deep learning, the scalability of the method is important. - Scalability to deeper networks The posterior samples from 'POSTERIOR - TWO LAYERS MODEL' of Fig. D.4 seem off. It would be good if the authors addressed how the method would perform for more complex model classes. Does the method break down in this case, or could this potentially be mitigated?
- Scalability to models with more parameters As mentioned in the paper, there is a computational cost associated with the growing dimensionality of the parameter space, because the number of necessary solver steps increases. I am worried that the method cannot be applied to larger, deeper NNs, as the solutions found by the ODE solver in practice for larger dimensions will not be of sufficient quality. Since deep neural networks typically consist of many more parameters than the models considered here, this seems like a very big limitation. - Scalability to larger datasets The method scales linearly in the number of data points. - Quantitative results and comparisons The method only considers very small models (e.g. single- or two-layer NNs) and small toy-problem datasets. This small-data regime would allow computing (close to) exact posteriors, which would allow a better quantitative assessment of the posterior found by the Riemannian Laplace approximation. Furthermore, it would be interesting to see larger model and data regimes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: a) What are the method's memory and computational budget limitations? b) How accurate is the ODE solver when the number of dimensions grows? If the quality of the solutions degrades in higher dimensions, can we expect the method to remain functional for larger models? c) What are the current limitations to scaling the method to larger models and datasets (e.g. ResNet/Transformer models at CIFAR/ImageNet scale)? d) MacKay also notes in [1] that the choice of basis for a Laplace approximation is important, which might be a relevant reference. What are the most important reasons to consider the proposed adaptation using the Riemannian metric over other potential changes of basis? [1] MacKay, David JC. "Choice of basis for Laplace approximation." Machine learning 33 (1998): 77-86. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned, I think being able to adapt the quality of Laplace approximations by considering geometrical aspects of the loss landscape is very interesting. My concerns are mainly about the practicality of the approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer CqAb We thank the reviewer for the positive consideration of our work. We also appreciate the time taken to review our paper. We addressed all the concerns you highlighted under the Weaknesses and Questions sections. **Point 1 / Weaknesses** > *... Fig. D.4 seem off. It would be good if the authors addresses how the method would perform for more complex model classes. Does the method break down in this case, or could this potentially be mitigated?* In this example, we consider the linearized version of our approach. The data that we use imply a specific loss-landscape structure that our samples respect. In the regions where data exist our samples agree with the MAP, while in the other regions the behavior of the samples cannot really be "judged", apart from the uncertainty they imply. Interestingly, if the prior is not optimized, then our method behaves "as expected", respecting the MAP even in the region without data (see the attached PDF for a plot in that setting). The reason is that with the prior optimized, the loss landscape of our linearized approach has a particular "biased" behavior, which batching seems to help alleviate. **Point 2 / Weaknesses** > *...This small data regime would allow computing of (close to) exact posteriors, which would allow better quantitative assessment...* Thank you for the suggestion. In the attached PDF we added a comparison with the predictive distribution obtained by using HMC on the regression examples. **Point 3 / Weaknesses** > *The method only considers very small models (e.g. single or 2 layer NNs) and small toy problem datasets.* For the experiments we use an off-the-shelf solver for solving ODEs in ~5000 dimensions. We also use it for a LeNet with ~44000 parameters (see attached PDF). We would like to remark that from a differential geometry viewpoint this is already a surprising result. We expect to be able to push the dimensions further by developing suitable ODE solvers.
**Point 4 / Questions** > *What are the method's memory and computational budget limitations?* As we explained in the general comment, the overhead our method has on top of the Laplace approximation is given by the need to solve an ODE to get a sample. If we define $N$ to be the number of datapoints and $W$ to be the number of parameters, then computing the gradient and the Hessian-vector product (hvp) are both $O(NW)$. A single step of a Runge-Kutta method of order $5(4)$ is $O(W)$ [1]. At each step we have to evaluate the ODE, which requires the computation of the gradient and the hvp. Therefore, if we assume that the solver performs $S$ steps, the computational cost of the solver is $O(SNW)$. 1. Hairer, E., Nørsett, S. P., and Wanner, G. Solving Ordinary Differential Equations I, Nonstiff Problems. Springer, 1993. **Point 5 / Questions** > *How accurate is the ODE solver when the number of dimensions grows?* The accuracy of the solution does not depend on the dimensions of the problem per se, but on the complexity of the ODE problem and the solver. We solve ODEs using a general-purpose NumPy ODE solver that is based on a high-accuracy algorithm (Runge-Kutta), where accuracy here means that the solution perfectly satisfies the ODE system for all time steps. Therefore, our solutions are highly accurate, at the price of increased computational complexity. We refer to the general comment for potential improvements in efficiency. We also conjecture that "this type of accuracy" may not be critical for our method. Instead, it may be sufficient, and "accurate" enough for our method, for the generated geodesic to travel within the low-loss region and stop when the loss increases. A specialized solver with these characteristics is of particular interest.
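How a second-order system like the geodesic ODE is handed to a general-purpose solver can be sketched as follows. This is an illustrative stand-in, not the paper's system: a harmonic oscillator replaces the geodesic right-hand side (which would involve the Riemannian metric), so the solution can be checked against $\cos(t)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch: a second-order ODE c''(t) = f(c(t), c'(t)) is reduced
# to a first-order system in the state z = (c, c'), exactly as done when
# handing a geodesic equation to a general-purpose solver like solve_ivp.
# Here f(c, v) = -c (harmonic oscillator), so c(t) = cos(t) for c(0) = 1,
# c'(0) = 0; the true geodesic acceleration would depend on the metric.

def second_order_rhs(c, v):
    return -c  # placeholder for the geodesic acceleration term

def first_order_system(t, z):
    c, v = z
    return [v, second_order_rhs(c, v)]

sol = solve_ivp(first_order_system, t_span=(0.0, np.pi), y0=[1.0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-10)
c_end = float(sol.y[0, -1])   # cos(pi) = -1
```

The adaptive RK45 stepping here is what makes the step count $S$ problem-dependent, as discussed in the complexity analysis above.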
**Point 6 / Questions** > *What are the most important reasons to consider the proposed adaptation using the Riemannian metric over MacKay's choice of basis for a Laplace approximation?* Thank you for the reference; this is indeed related to our work and we will include it in the associated section. MacKay proposed to reparametrize the model/parameter space so that the posterior is as near as possible to a Gaussian. This way, LA will be a good approximation. However, this is not always straightforward, especially in the deep-networks regime. Our method is "similar" in spirit, but instead of reparametrizing the parameter space we make the approximation adapt to it by finding several local bases instead of a general one.
Rebuttal 1: Rebuttal: ## General Comment to all reviewers We would like to thank the reviewers for their thoughtful comments, positive considerations and suggestions for improving the paper. We appreciate that you found our work novel, well-written, with potential impact for the community, and inspiring for follow-up works. We make a first general comment about the scalability of our approach and we also reply individually to each reviewer. We are happy to clarify further during the discussion phase if some concerns remain. **Scalability** The scalability of our proposed approach and its applicability to big models was a common topic of discussion among the reviewers, who acknowledge that we also highlight it in the paper as the main limitation of our approach. We will elaborate more on this issue in the camera-ready version based on the following discussion. **Evaluation of the ODE**: - Based on the structure of the Riemannian metric we simplified the original ODE, and its final form allows applying automatic differentiation to evaluate it. Otherwise, evaluating the original ODE is rather prohibitive, as it needs to compute the Hessian and the gradient of the loss individually. - Evaluating the ODE needs all the training data points, which is prohibitive with big data. We proposed the "obvious" trick of using a random (small) batch when solving an ODE, which empirically in some cases even boosts performance, motivating further research. Another idea is to evaluate the ODE in parallel using batching, and then collect all sub-results for the final ODE result. - Another potential idea is to approximate the Riemannian metric with surrogates leading to simpler ODE systems, or to approximate the current ODE, e.g., by considering a diagonal Hessian approximation in the ODE, as reviewers BssD and dyBq suggested. **ODE solver**: - For solving the ODE system we use a general-purpose Python solver (``scipy.integrate.solve_ivp``) that runs on the CPU.
When the solver needs to evaluate the ODE, our automatic-differentiation-based approach runs on the GPU, and the result is moved to the CPU (``.detach().cpu().numpy()``), causing a significant overhead. Especially when dimensions increase, both the transfer of the data and the computations on the CPU are sub-optimal. Implementing an ODE solver in a suitable automatic-differentiation framework (e.g. JAX) running solely on the GPU would dramatically improve performance. - As we know the structure and behavior of our ODE system (geodesics start from low loss, which increases along the curve), a potential future work would be to develop solvers that exploit this information. Usually, general-purpose solvers aim for accuracy, while in our case even inexact solutions could potentially be useful if computed fast [1]. - The benchmark result (see attached PDF) shows that, in general, increasing data and dimensionality makes the ODE system potentially more expensive to solve. However, there are cases where the solution of the ODE is fast in big models. Related research shows that the loss landscape of overparametrized models exhibits behaviors that may be beneficial to our method, e.g., minima easily connected with (nonlinear) continuous paths. Therefore, we believe that even in high dimensions our method has the potential to scale. It may also inspire new ways to study generalization via the loss landscape. The cost of evaluating a single ODE is $O(SNW)$, where $S$ is the number of steps of the solver, $W$ is the number of model parameters and $N$ is the dataset size. This is the complexity overhead on top of the usual Laplace approximation. However, the number of steps the solver needs to converge mainly depends on the complexity of the associated ODE problem, which is defined by the geometry of the loss landscape and the initial velocity. In the attached PDF we include a synthetic benchmark example that shows how the runtime changes with respect to the size of the model and the dataset.
While increasing the model parameters and dataset size affects $W$ and $N$, it may be the case that the ODE systems do not necessarily get harder, so the value of $S$ would remain small. In other words, evaluating the ODE becomes more expensive as dimensions increase, but perhaps the actual system gets easier to solve. With the current implementation we manage to solve ODEs of up to ~44000 dimensions (LeNet), which is surprising from the differential-geometry perspective, and show that the method performs well in this regime (see attached PDF), but more sophisticated implementations will speed this up. Moreover, in a practical application samples are generated offline and not during test time. Overall, in this paper we propose a Riemannian extension to the Laplace approximation and empirically verify the claims, by considering the complete formulation and relying on pre-existing tools for the implementation. As the reviewers acknowledge, there is a spectrum of potential research ideas in between for either improving the efficiency of our approach or for new techniques based on the same differential-geometric principles. This is in spirit related to Neural ODEs, where the original paper proposed the main concept and follow-up works improved parts of it, for example, specialized ODE integrators [2,3,4,5,6]. **References** 1. "Fast and robust shortest paths on manifolds learned from data". G. Arvanitidis et al., AISTATS 2019 2. "Opening the blackbox: accelerating Neural Differential Equations by regularizing internal solver heuristics". A. Pal et al., ICML 2021 3. "On numerical integration in Neural Ordinary Differential Equations". A. Zhu et al., ICML 2022 4. "On robustness of Neural Ordinary Differential Equations". H. Yan et al., arXiv 2020 5. "MALI: A memory efficient and reverse accurate integrator for Neural ODEs". J. Zhuang et al., arXiv 2021 6. "STEER: Simple temporal regularization for Neural ODEs". A. Ghosh et al., arXiv 2020 Pdf: /pdf/a95f86a0a4d6df740c35cf3111adbaca4318c621.pdf
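The gradient and Hessian-vector product needed at each ODE evaluation (see "Evaluation of the ODE" above) can be sketched without automatic differentiation via central differences of the gradient. This is an illustrative stand-in, not the autodiff implementation used in the paper; the quadratic loss is chosen so the exact answer $Hv = Av$ is known:

```python
import numpy as np

# Hedged sketch: the per-step quantities in the ODE evaluation are the loss
# gradient and a Hessian-vector product (hvp). Autodiff delivers both in
# O(NW); the central-difference approximation below,
#   H(w) v ~ (grad(w + eps*v) - grad(w - eps*v)) / (2*eps),
# is an autodiff-free stand-in for illustration only.

def hvp_fd(grad_fn, w, v, eps=1e-5):
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2.0 * eps)

# Toy loss L(w) = 0.5 * w^T A w with symmetric A, so grad(w) = A w and H = A.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M + M.T
grad_fn = lambda w: A @ w

w = rng.normal(size=4)
v = rng.normal(size=4)
approx = hvp_fd(grad_fn, w, v)
exact = A @ v
```

Because only matrix-vector products with the Hessian are ever needed, the full $W \times W$ Hessian never has to be formed, which is what keeps each ODE evaluation at $O(NW)$.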
NeurIPS_2023_submissions_huggingface
2023
Asymmetric Certified Robustness via Feature-Convex Neural Networks
Accept (poster)
Summary: This paper is based on the following elegant observation: Consider the case of binary classification in which we learn a function $f: \mathbb R^d\to \mathbb R$ and classify a point based on thresholding $f$ at 0. Assume $f(x)=g(\phi(x))$, where $g:\mathbb R^m \to \mathbb R$ is convex and $\phi:\mathbb R^d\to\mathbb R^m$ is Lipschitz. Then using the Lipschitz constant of $\phi$ and the fact that $g(y)\geq g(\phi(x))+v\cdot (y-\phi(x))$ for any subgradient $v$ of $g$ at $\phi(x)$, one can easily compute a robustness certification for $f$. The authors then argue that this paradigm is fairly applicable to adversarial binary classification based on the following observations: 1) If $m$ is larger than the number of data points and $\phi$ is the identity, then there is a convex function achieving perfect accuracy. 2) There are many adversarial classification tasks involving asymmetric binary classification such as malware detection and spam detection Strengths: - computation of a robust certificate for this method is much faster than competing methods and achieves comparable performance - formalizes the fact that in many real-world applications, adversarial classification is an asymmetric problem - exposition is excellent and bibliography is very thorough Weaknesses: - Binary classification problems are often handled by SVMs, for which a robust certification is easy to compute. These models are simpler and easier to train than neural nets. Furthermore, one expects some robustness to arise from these models as they maximize the minimum margin. This paper does not compare accuracy or robustness with SVMs. For instance, due to Corollary 3.8, I would expect SVMs to perform well on the CIFAR-10 example. Can you find an example where your method does better than an SVM in terms of either accuracy or robustness? - Consider $\hat g$ as defined in Definition 2.2. If $b_2,\ldots, b_\ell\geq 0$, the last $\ell-1$ layers reduce to a linear function.
This observation suggests that depth for convex neural nets has fairly different effects than depth for standard neural nets. It further suggests that to achieve an expressive $\hat g$, $\ell$ would need to be quite large. Can you elaborate on your choice of $\ell$? - There are many instances for which the dimension of the dataset $d$ is less than the number of data points. One would expect that using a non-identity $\phi$ in these cases would be advantageous. The paper does not discuss how to choose $\phi$ in this case - Comment: I would call "Corollary 3.8" a "Fact" rather than a "Corollary". Optimization in finite-precision arithmetic always has errors, which is weaker than a formal proof. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See the questions in the first two bullets of "Weaknesses". A convincing response to the first bullet could significantly change my opinion on the rating of this paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: See the first two bullets under "Weaknesses" Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
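A minimal sketch of the certification recipe from the summary (a subgradient lower bound on the convex $g$, divided by the Lipschitz constant of $\phi$). The toy piecewise-linear $g$ and all constants below are illustrative, not the paper's architecture:

```python
import numpy as np

# Hedged sketch of the certificate described in the summary: for f(x) = g(phi(x))
# with g convex and phi L-Lipschitz, the subgradient inequality
#   g(y) >= g(phi(x)) + v . (y - phi(x))   for any subgradient v of g at phi(x)
# yields a certified l2 radius r = g(phi(x)) / (L * ||v||_2) when g(phi(x)) > 0:
# every x' within distance r of x is still classified positive.

def certified_radius(g_val, subgrad, lipschitz):
    return g_val / (lipschitz * np.linalg.norm(subgrad))

# Toy convex g(y) = w.y + max(0, u.y); phi = identity, so L = 1.
w, u = np.array([1.0, 0.5]), np.array([0.25, -0.1])
x = np.array([2.0, 1.0])
g = lambda y: w @ y + max(0.0, u @ y)
subgrad = w + (u if u @ x > 0 else 0.0 * u)
r = certified_radius(g(x), subgrad, lipschitz=1.0)

# Empirical check: no perturbation strictly inside the radius flips the sign.
rng = np.random.default_rng(0)
violations = 0
for _ in range(1000):
    d = rng.normal(size=2)
    d *= 0.999 * r / np.linalg.norm(d)
    if g(x + d) <= 0:
        violations += 1
```

The certificate costs a single (sub)gradient evaluation, which is the source of the fast certification the review highlights under Strengths.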
Rebuttal 1: Rebuttal: Thank you for your kind compliment on the exposition of our paper. We appreciate your constructive suggestions, which we address below. 1. "Can you find an example where your method does better than an SVM in terms of either accuracy or robustness?": Certainly, we have shared the relevant script with the AC through an anonymized link (as per the rebuttal guidelines). For the CIFAR-10 task that you mentioned, a linear SVM only achieves a 54.8% clean accuracy, compared to 68% for our method (with regularization). We also experimented with adding the same Lipschitz-continuous feature map that we used for our architecture (the concatenation $x\mapsto (x,|x|)$, after shifting the images to be in the $[-0.5, 0.5]$ range). This only marginally improved the clean accuracy of the SVM to 55.6%. Certification performance is also roughly an order of magnitude worse than our method across all norms---e.g., the maximum $\ell_1$-norm certified radius was just 2.93, compared to over 30 for our method. This reflects the fact that convex classifiers are significantly more flexible than linear classifiers, while retaining fast certification computation by using linear underapproximators. Note that Corollary 3.8 applies to the class of input-convex ReLU neural networks; it does not imply that an SVM can achieve perfect training accuracy, since SVMs constitute only a small subset of all possible convex classifiers. For an SVM to achieve perfect training accuracy, the dataset would need to be linearly separable, which is a much stronger condition than the convex separability in both directions (cats-dogs and dogs-cats) that we show (e.g., consider class 1 points at $(-1,0),(1,0)\in\mathbb{R}^2$ and class 2 points at $(0,-1),(0,1)\in\mathbb{R}^2$, which are convexly separable in both directions but not linearly separable).
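The four-point $\mathbb{R}^2$ example above (convexly separable in both directions, yet not linearly separable) can be checked numerically. The convex separator $|x_1| - 1/2$ below is one illustrative choice, not a construction from the paper:

```python
import numpy as np

# Class 1 at (+-1, 0), class 2 at (0, +-1): the XOR-like configuration from
# the rebuttal. The convex g(x) = |x1| - 0.5 separates class 1 from class 2
# (and |x2| - 0.5 works in the other direction), while no linear w.x + b can:
# w1 + b > 0 and -w1 + b > 0 force b > 0, but w2 + b < 0 and -w2 + b < 0
# force b < 0, a contradiction.

class1 = np.array([[-1.0, 0.0], [1.0, 0.0]])
class2 = np.array([[0.0, -1.0], [0.0, 1.0]])

g = lambda x: abs(x[0]) - 0.5   # convex: |.| composed with a linear map
convex_ok = all(g(x) > 0 for x in class1) and all(g(x) < 0 for x in class2)

# A random search over linear classifiers never separates the two classes.
rng = np.random.default_rng(0)
linear_ok = False
for _ in range(10000):
    w, b = rng.normal(size=2), rng.normal()
    if all(w @ x + b > 0 for x in class1) and all(w @ x + b < 0 for x in class2):
        linear_ok = True
```
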
Here are the clean accuracies and maximum certified radii for the SVM on the CIFAR-10 cats-dogs task, as output by the script:

| Feature map | Clean accuracy | Max. $\ell_1$ radius | Max. $\ell_2$ radius | Max. $\ell_\infty$ radius |
| --- | --- | --- | --- | --- |
| None | 0.548 | 2.930618825346973 | 0.2083568160433327 | 0.004723921349099762 |
| $(x, \lvert x\rvert)$ | 0.556 | 1.4921445716893802 | 0.11696300733917692 | 0.002664742804160659 |

2. "Consider $\hat{g}$ as defined in definition 2.2...": The biases $b^{(l)}$ are not constrained to be nonnegative, and even if they are nonnegative, the $l$th layer may still be nonlinear in the "passthrough" $x^{(0)}$, since $C^{(l)}x^{(0)}$ is fed into the activation with $C^{(l)}$ having possibly negative elements. This nonlinearity then propagates into all subsequent layers. We selected our layer count $L$ empirically based on certification performance, finding that relatively shallow networks tend to suffice (e.g., $5$ layers for CIFAR-10 cats-dogs). We presume that this is tied to the documented tendency for ICNNs to avoid overfitting (see [60] in our manuscript), although a more thorough investigation of this phenomenon is better suited for future work. 3. "The paper does not discuss how to choose $\phi$ in this case": The relationship between dataset dimensionality and the number of data points is quite complex, and our feeling is that the characteristics of the dataset distribution are practically more important than its dimension. For example, many noncomplex datasets in lower dimensions might be well classified with a simple classifier and don't need any feature map (e.g., the MNIST 3-8 dataset). On the other hand, the higher-dimensional CIFAR cats-dogs dataset benefited significantly from a feature map (see appendix G.2). While our Theorem 3.9 does begin to look at the interplay of dimension and separability, a more empirical investigation of this phenomenon and appropriate feature maps is better suited for future work. 4.
"Optimization in finite precision arithmetic always has errors, which is weaker than a formal proof": Your point on finite precision is a good one. Thus, we have re-labeled the result as a "Fact," per your suggestion. Q1. "A convincing response to the first bullet could significantly change my opinion on the rating...": Thank you for your openness. We have addressed each of your raised points, and in particular demonstrated the superiority of our method over SVMs, as per your first bullet point. We hope that you find our updates sufficient to significantly change your rating, and appreciate your help in enhancing the paper. --- Rebuttal Comment 1.1: Title: Nice SVM results! Comment: 1. Your comparison with SVMs is compelling. I have updated my score. At the same time, the clean accuracy of your method is much closer to an SVM's than a neural net's. Sorry, my mistake on Corollary 3.8. Consider including that example of points in $\mathbb R^2$ that are convexly separable but not linearly separable in your paper. 2. Great! I think you should also include this discussion in your paper. This has intuitively helped me understand the sources of non-linearity in your nets.
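The passthrough nonlinearity discussed in point 2 can be made concrete with a toy forward pass; a minimal sketch (hypothetical shapes and random parameters, not the paper's architecture), where the nonnegative weights on the hidden state preserve convexity in $x$ while the unconstrained $C^{(l)} x^{(0)}$ terms feed the raw input back through every activation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 3, 4  # illustrative input / hidden widths

# Passthrough weights C^(l) and biases b^(l) are unconstrained (may be negative)...
C = [rng.normal(size=(h, d)) for _ in range(3)]
b = [rng.normal(size=h) for _ in range(3)]
# ...but weights acting on the previous hidden state must be entrywise
# nonnegative for the output to remain convex in x.
W = [rng.random(size=(h, h)) for _ in range(2)]
w_out, c_out = rng.random(size=h), rng.normal(size=d)

def g(x):
    z = np.maximum(C[0] @ x + b[0], 0.0)
    for W_l, C_l, b_l in zip(W, C[1:], b[1:]):
        z = np.maximum(W_l @ z + C_l @ x + b_l, 0.0)  # C_l x re-enters each ReLU
    return w_out @ z + c_out @ x

# Midpoint convexity check: g((x + y) / 2) <= (g(x) + g(y)) / 2
x, y = rng.normal(size=d), rng.normal(size=d)
assert g(0.5 * (x + y)) <= 0.5 * (g(x) + g(y)) + 1e-9
```

Because each $C^{(l)} x$ term is fed through a ReLU, every layer stays nonlinear in $x$ even when the biases are nonnegative, yet the composition remains convex.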
Summary: This paper tackles the asymmetric nature of classification and adversarial attacks intrinsic in most real-world scenarios that have a live and motivated attacker: that attacks are unidirectional, and proposes a general-purpose technique for specifying a certifiably robust defense in such scenarios. This is done by using an Input Convex Neural Network (ICNN) to guarantee the convexity of the function $g(x)$, and then double-backprop is applied to get the gradient of the ICNN with respect to its input to compute the certificate radius. ------- I think my concerns are mostly addressed. I'm waffling between 6/7, so at least updating my minimum confidence. Strengths: 1. Valuable theoretical contribution to how adversarial machine learning works in real life, over unmotivated fears about a computer-vision boogeyman. 2. Comparisons against many alternative certifiable defenses, with significant improvement in some cases. 3. Should be faster, but such results are obscured. Weaknesses: 1. The experimental section is woefully presented and obscures the nature of the benefit; relegating the setup/explanation to the appendix is not acceptable in my opinion. 2. The mathematical presentation (additionally not helped by the NeurIPS template) is hard to follow, e.g. the setup for the reader in section 1.3 is crammed into a tightly packed paragraph to hit the page limit, and the readability suffers. Technical Quality: 4 excellent Clarity: 1 poor Questions for Authors: 1. How much faster is your approach at certifying a radius than each other classifier? This should be in the experiments section to justify the value of the method. 2. Is there a predictive difference in your approach versus other methods (I suspect yes, e.g., randomized smoothing)? See Q1, it should be in Experiments. 3. I read the appendix and it says the "pixels" in Malimg are [0, 255] encoded. Wouldn't that mean 10^4/255 = 39 pixels can be altered?
It looks like others are right around the one-pixel attack radius in this case. 4. Why not use the neural network approach/data from _SOREL-20M: A large scale benchmark dataset for malicious PE detection_? The Malimg dataset is highly flawed, taking an approach ("malware images") that has been long known to be errant (_Malware Detection by Eating a Whole EXE_) and has significant issues in failing to account for biases that occur in the malware space (_TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time_). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 1 poor Contribution: 4 excellent Limitations: Limitations are maybe sufficiently addressed, see questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
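The closed-form certificate alluded to in the summary — a gradient-based tangent-plane bound on the convex $g$ — can be sketched on a toy convex ReLU function. This is our reading of the mechanism, not the paper's implementation, and the function below is hand-built rather than trained:

```python
import numpy as np

# Hand-built input-convex ReLU function (illustrative):
#   g(x) = 1.0 * relu(x1 + x2) + 0.5 * relu(x1 - x2) - 0.1
# Nonnegative outer weights keep g convex in x.
W = np.array([[1.0, 1.0], [1.0, -1.0]])
a = np.array([1.0, 0.5])
b = -0.1

def g(x):
    return a @ np.maximum(W @ x, 0.0) + b

def grad_g(x):
    active = (W @ x > 0).astype(float)  # ReLU (sub)gradient pattern
    return (a * active) @ W

def certified_radius(x, p=2.0):
    """l_p radius within which g provably stays positive.

    Convexity gives g(y) >= g(x) + grad.(y - x), so g(y) > 0 whenever
    ||y - x||_p < g(x) / ||grad||_q, with q the dual exponent to p.
    """
    gx = g(x)
    if gx <= 0:
        return 0.0  # the certificate is one-sided: only the g > 0 class
    q = np.inf if p == 1.0 else p / (p - 1.0)
    return gx / np.linalg.norm(grad_g(x), ord=q)
```

One gradient evaluation yields the radius directly, which is why this kind of certificate costs milliseconds while sampling- or branch-and-bound-based verifiers cost seconds or more.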
Rebuttal 1: Rebuttal: Thank you for your comments and questions. We have addressed them below, and revised the manuscript accordingly. 1. "The experimental section...": As mentioned in our response to Reviewer qP9X, we have revised the Experiments section to satisfy your suggestions. In particular, we have moved the most pertinent details of the experimental setup from Appendix E to Section 4, and we have explicitly defined the keywords in the legends, including the baselines (with citations) as well as the clean accuracy numbers. We believe these edits make the presentation of our experimental results self-contained and much clearer to the reader. Please find the revised section in the general response. 2. "The mathematical presentation... is hard to follow, e.g.... section 1.3": We made the following revisions to the mathematical presentation according to your comments and the suggestions from Reviewer qP9X: a) We have moved Propositions 3.4 and 3.5 (that characterize the decision region geometry) to Appendix C, for two reasons: i) To more clearly highlight the important theorems and avoid a long sequence of mathematically dense results. ii) To make room for our experimental details added to Section 4. In place of these propositions, we have modified lines 250--254 with the more easily understood description: "Although low-dimensional intuition may cause concerns regarding the convex separability of sets of binary-labeled data, we will soon see in Corollary 3.8 and Theorem 3.9 that even the CIFAR-10 cats and dogs classes are convexly separable, as well as relatively unstructured binary datasets in high dimensions (with high probability).
This convex separability of datasets is a highly advantageous property, as input-convex classifiers may always perfectly fit such data, and conversely, input-convex classifiers have interpretable decision regions consisting of a convex set and its complement (the latter of which is not necessarily true for our more general feature-convex architectures with $\varphi\ne\text{Id}$). We formally state and prove these geometric characterizations in Propositions 3.4 and 3.5 of Appendix C." b) We have updated the first paragraph of Section 3 to clarify the importance of the presented results: "We present our main theoretical results in this section. First, we derive asymmetric robustness certificates (Theorem 3.1) for our feature-convex classifiers in Section 3.1. Then, in Section 3.2, we introduce the notion of convexly separable sets in order to theoretically characterize the representation power of our classifiers. Our primary representation results give a universal function approximation theorem for our classifiers with $\varphi=\text{Id}$ and ReLU activation functions (Theorem 3.6) and show that such classifiers can perfectly fit convexly separable datasets (Theorem 3.7), including the CIFAR-10 cats-dogs training data (Corollary 3.8). We also show that this strong learning capacity generalizes by proving that feature-convex classifiers can perfectly fit high-dimensional uniformly distributed data with high probability (Theorem 3.9)." Since the notations introduced in Section 1.3 are used throughout the paper, we find it best to define them up front and in their own short section for reference. We are open to any suggestions you may have for enhancing the readability of Section 1.3. Q1.
"How much faster is your approach...": To make the runtime discussion in Section 4 more explicit, we ran a quick benchmark on the CIFAR cats-dogs dataset:

| Method | Certification time (s) |
| ------ | ---------------------- |
| FCNN | 0.0021 |
| RS Laplace | 5.95 |
| RS Gauss | 5.89 |
| RS Uniform | 5.91 |
| Splitting | 0.08 |
| Cayley | 0.052 |

All RS methods use 100,000 samples. $\alpha,\beta$-CROWN is more difficult to compare directly as runtime is computed per property, but generally it is far more computationally expensive and takes on the order of tens of seconds. We also note that the splitting method scales linearly with noise and image size and thus takes on the order of several minutes per sample on Malimg. Q2. "Is there a predictive difference...": Yes, the predictions and certificates are fundamentally different in nature between the three randomized smoothing baselines and deterministic methods (including our deterministic method). We have added the following sentence to the Experiments section to remind the reader of this fact and more clearly categorize the different methods in consideration: "Notice that the three randomized smoothing baselines have fundamentally different predictions and certificates than the deterministic methods (including ours); namely, the predictions are random and the certificates hold only with high probability." Q3. "Wouldn't that mean 10^4/255 = 39 pixels can be altered?": We normalize the pixel values to $[0, 1]$. We have added the clarifying sentence "All pixel values are normalized into the interval $[0,1]$." to Section 4 of the manuscript. Q4. "Why not use ... SOREL-20M...": The baseline network proposed in SOREL-20M operates on features extracted from the binary. An architecture that certifies robustness to changes in features is difficult to analyze, as the implications for robustness in bit-space (which the adversary can directly manipulate) are unclear.
While more sophisticated approaches for classifying directly on bit-space exist (see [25] in our manuscript), we consider these outside the aim of our work. Our primary aim is to introduce a more general framework of certified asymmetric robustness (not just for malware). We used Malimg to give an intuitive practical application that naturally fits within this asymmetric framework; we are making no claims regarding whether the Malimg approach is good or bad for the specific problem of malware classification. --- Rebuttal Comment 1.1: Comment: >Q1. "How much faster is your approach...": If you could re-run and give a range of CROWN timings for the actual image data used, if it is a matter of scaling to observe the RS speedup, please do so. It makes the argument far stronger. I think it is acceptable if the result is measured against a few samples in terms of rebuttal, but you would have plenty of time till camera-ready to get that number on the whole test set. > Malimg to give an intuitive practical application that naturally fits within this asymmetric framework; we are making no claims regarding whether the Malimg approach is good or bad for the specific problem of malware classification. I think a warning to the reader that Malimg is done only for illustration, and that the Malimg approach isn't advised for actual deployment (w/ citation), would satisfy my concern with the rest of the explanation. --- Reply to Comment 1.1.1: Comment: *CROWN timings:* As a point of reference, we ran a quick $\alpha,\beta$-CROWN experiment on 10 CIFAR cats-dogs images. Verifying a particular property (norm + epsilon) takes on average 17.48 seconds per sample on our hardware. Note that this actually understates the complexity of computing the certified radius of a particular sample.
$\alpha,\beta$-CROWN only provides a binary true/false signal of whether a particular radius is certified; to find the true largest certified radius for a sample would require an iterative scheme (e.g., binary search). Our method directly outputs the certified radius. We agree that a more explicit discussion of runtime would highlight the advantages of our method more clearly. We appreciate the suggestion and will include a runtime experiment on the whole test set in the camera ready version. *Malimg:* We are happy to put this remark in the camera ready version. Please let us know if you have any remaining concerns or suggestions regarding our work!
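The iterative scheme mentioned above can be sketched generically; a minimal bisection over a hypothetical boolean verifier oracle, where each call stands in for one full verification run (e.g., one $\alpha,\beta$-CROWN query at a fixed radius):

```python
def largest_certified_radius(is_certified, r_hi, tol=1e-4):
    """Bisect for the largest radius the verifier will certify.

    Assumes is_certified(0) is True and is_certified(r_hi) is False;
    every oracle call is one complete verification run.
    """
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_certified(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

At roughly 17 s per query (the timing above), the ~14 queries needed for 1e-4 precision on a unit interval already cost minutes per sample, versus a single closed-form evaluation for a convexity-based certificate.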
Summary: This paper considers a specific problem in the binary classification task, where one class is recognized as a ‘sensitive’ class, which needs to be certified for robustness. The authors propose a special network called a feature-convex neural network, which combines a Lipschitz network and a convex ReLU network. Expressive power results for input-convex networks on convexly separable sets are provided. Experiments show that the proposed networks achieve better performance on $l_1$ robustness compared to other certified robustness methods. Strengths: 1. The paper is well-written and the problem and proposed method are clear. 2. The proposed problem and method are new. 3. The theoretical results are solid. Weaknesses: 1. The proposed feature-convex network may have classification power too limited to be practically useful. The lower training accuracy on a larger dataset like CIFAR-10 may imply that this architecture will be harder to apply to more complex real-world datasets. 2. The verification of convex separability of CIFAR-10 cats vs dogs is confusing. The authors use convex optimization to show that a given cat image cannot be recovered from a convex combination of dog images. Is this sufficient evidence for convex separability? As the authors mentioned, the input-convex classifier struggles to fit the CIFAR-10 cats vs dogs data, only achieving $70\%+$ training accuracy. To make convex separability convincing, one may at least design an input-convex network architecture that achieves higher training accuracy (even without generalization ability). 3. The experimental results are a little weak. The proposed feature-convex classifier only shows decent performance on $l_1$ robustness, but is suboptimal in $l_2$ and $l_{\infty}$ robustness. As randomized smoothing is a more general certified method which is not designed for asymmetric binary classification, the suboptimality of the feature-convex classifier shows the limited power of the proposed network. 4.
A two-layer neural network is used to evaluate the certification power of $\alpha,\beta$-CROWN. This seems to be a weaker baseline, as the conv-small net in $\alpha,\beta$-CROWN uses a four-layer convolutional network [1]. 5. The applications mentioned in the paper are mostly for imbalanced datasets (one class has more data than the other), but the experiments are mostly for balanced datasets. [1] S. Wang, H. Zhang, K. Xu, X. Lin, S. Jana, C.-J. Hsieh, and J. Z. Kolter. Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. In NeurIPS 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. When selecting the parameter $\tau$ to shift the classification threshold, why use the balanced accuracy $\alpha_2(\tau)=\alpha_1(\tau)$, but not the commonly-used criteria in binary classification, F1-score or AUROC? Does there exist previous work using balanced accuracy for (asymmetric) binary classification? 2. Can your method be extended to multi-class classification tasks? 3. A very simple Lipschitz feature extractor $g$ is used in the proposed classifier. Can a more complex Lipschitz feature extractor be used to keep both certified robustness and accuracy high? 4. In the proposed application of spam classification, why do attackers only attempt to fool the classifier toward the “not-spam” class? The converse attack is more detrimental if the classifier recognizes a not-spam, important email as a spam one. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation.
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your positive feedback on our paper's presentation, proposed problem/approach, and theory. 1. "The proposed feature-convex network may limit classification power...": In practice, we find our clean accuracies to be on par with the state-of-the-art robust classification baseline methods at a comparable level of certification performance (cf. Figure 2). Corollary 3.8 further shows that our model is theoretically capable of attaining perfect training accuracy on CIFAR cats-dogs. Under standard training, our learned model on CIFAR achieves 73.4% accuracy, emphasizing that there is significant room for improvement. We are thus limited by standard architecture designs and training algorithms when applying them to input-convex models--not the capabilities of ICNNs themselves. We therefore hope that this result motivates future research on new algorithms that are specially tailored to FCNNs/ICNNs. We have revised the manuscript to more clearly pose this open problem to the readers by adding the following "Open Problem" statement at line 292: "**Open Problem 3.9.** Train an input-convex ReLU neural network that achieves 100% training accuracy on the unaugmented CIFAR-10 cats-versus-dogs dataset." 2. "The verification of convex separability of CIFAR-10 cats vs dogs is confusing": We use convex optimization to verify that, if you choose any cat image, it cannot be represented as a convex combination of the dog images. Hence, the set of cat images (call it $X_1$) lies completely outside the convex hull of the dog images (call the dog images $X_2$ and their convex hull $X$). This is precisely what it means for cats-dogs to be convexly separable: namely, that $X_2\subseteq X$ and $X_1\subseteq \mathbb{R}^d \setminus X$. This analysis of the dataset is therefore sufficient to conclude convex separability.
That our learned ICNN (using standard training) achieves 73.4% training accuracy does not alter the fact that cats-dogs is convexly separable, as the above experiment already verifies the separability. As discussed above, this points to the need for novel ICNN designs and training algorithms to unlock their full potential. 3. "The experimental results are a little weak": We emphasize that our approach features several substantial benefits over randomized smoothing. Unlike our deterministic certificates, RS is inherently probabilistic (since it is based on an empirical expectation): there is always a strictly positive probability of a RS classifier producing a prediction that violates its own certificates. Furthermore, randomized smoothing approaches are highly computationally intensive, taking on the order of seconds while our certificates are closed-form and can be computed in milliseconds (~$1000\times$ speedup). As our certificates are generally comparable (for $\ell_2$/$\ell_\infty$-norms) or decidedly superior ($\ell_1$-norm), we consider these advantages to be significant enough to highlight the promise of our method. 4. "A two-layers neural network is used to evaluate ... $\alpha,\beta$-CROWN": We used a smaller network for $\alpha,\beta$-CROWN to increase the verification performance of the baseline, as larger networks tended to be more computationally intensive to certify at a particular radius and runtime. 5. "The applications mentioned in the paper are mostly for imbalanced dataset": We focus our evaluation primarily on balanced datasets to avoid added analysis complexity, as the work is already quite lengthy. We note that our framework and method apply equally to the balanced and unbalanced scenarios. Q1. "...why use the balanced accuracy...": We balance the class accuracies in order to provide a fair comparison of certified accuracy curves across different methods. 
Otherwise, consider Method A and Method B, where Method A has a superior certified accuracy curve and Method B has a higher clean accuracy for the non-sensitive class; it would be difficult to compare the two methods' certified accuracy curves, as it is unclear how much of Method A's advantage came from compromising on non-sensitive class accuracy. Thus, we feel our approach is best for providing directly interpretable certified accuracy curves in the asymmetric setting. Note that in practice a user can adjust the threshold however they see fit--we chose this approach to facilitate fair and standardized comparisons between methods. Q2. "Can your method be extended to multi-class...": Yes. On line 168 in the main paper, we refer to the supplemental material where we propose two ways to extend our method to multi-class settings. Our multi-class generalizations allow for efficient and closed-form asymmetric robustness certificates either for one sensitive class, or one "group" of sensitive classes. Q3. "Can a more complex Lipschitz feature extractor be used...": Any feature map can be used so long as it provides a bounded Lipschitz constant. However, more complicated feature maps may come at the expense of a larger Lipschitz constant, resulting in smaller robustness certificates. Therefore, the feature map should be kept as simple as possible to assist with (ideally closed-form) computation of a small Lipschitz constant, but sophisticated enough to make the data convexly separable. Designing and/or learning low-Lipschitz yet high-performing feature maps is left as future work. Q4. "...why attackers only attempt to fool the classifier toward the not-spam class?": By definition, a message generated by an attacker is spam, and therefore the only way they can possibly fool the classifier is if they craft a message that is classified as non-spam.
Our natural goal is therefore to certify that an adversary cannot "lightly edit" a spam email to make it look to us like non-spam. On the other hand, your proposed situation is not adversarial in nature. An adversary would not craft genuinely important non-spam emails while trying to fool the classifier into thinking that they are spam. --- Rebuttal Comment 1.1: Comment: Thanks for the explanation. I will keep my score due to the limitation on the training accuracy.
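The threshold-balancing criterion discussed in Q1 above ($\alpha_1(\tau)=\alpha_2(\tau)$) can be sketched as a simple scan over candidate thresholds; a minimal version of our reading of the criterion, not the authors' actual code:

```python
import numpy as np

def balanced_threshold(scores, labels):
    """Choose tau so the two class-conditional accuracies match as closely as
    possible: fraction of positives with score >= tau vs. fraction of
    negatives with score < tau."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    taus = np.unique(scores)
    gaps = [abs((pos >= t).mean() - (neg < t).mean()) for t in taus]
    return taus[int(np.argmin(gaps))]
```

Shifting $\tau$ away from this point trades sensitive-class certified accuracy against non-sensitive clean accuracy, which is why a standardized choice makes certified accuracy curves directly comparable across methods.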
Summary: The paper is about certified robustness of neural networks. In particular, the authors explore the concept that there is typically a one-sided attack that needs to be certified, since adversaries have certain goals that only work in certain directions (e.g., classify a spam email as ham). The authors call this framework the "asymmetric robustness certification problem" and approach a solution to this problem using a feature-convex neural network architecture. The analysis is based upon ReLU activation functions, where the authors formalize and prove several results that are related to the geometry/convexity of the input data and how this can translate into the existence (not the identification) of a neural network that has perfect accuracy on the dataset. Of course, it is a neural network at the end of the day, so it can be applied anywhere and its performance observed. In this direction, the authors eventually evaluate their method on four different datasets against several baselines. **After Rebuttal:** I have read the reviews of others as well as the response by the authors. I am happy with the answers provided by the authors as well as their willingness to improve the presentation of the paper. I will upgrade my presentation score from 1 to 2 and also increase my overall score from borderline accept to weak accept. Strengths: + A new approach for certified robustness in a framework that deserves attention. + The proposed method is backed up with theoretical guarantees. + Experimental results on different datasets show that the method behaves well. Weaknesses: I think the main issue of the paper is the presentation of the results. Granted, the authors have put a lot of effort into this paper, as can be seen from the full paper. Nevertheless: + The authors use in many situations very long sentences and this makes it hard to follow their work, their descriptions, and ultimately give the appropriate merit to their work.
+ In addition to that, Section 3 and especially Section 3.2 are hard to follow, as we have a sequence of definitions, propositions, and theorems that the authors prove as part of theoretically justifying their method. + Not only that, but it is hard to understand what is important and what is not for the story that the authors want to discuss. + In Section 4, where we find the experiments, we have Figure 2 where the authors' method is compared against baseline methods. However, neither in the caption of the figure and the subfigures, nor in the actual text of the paper do we get to see what the other baselines are. There are some keywords in the legends of the different subfigures, but no explanation. Most likely RS X means randomized smoothing using noise X (but this is left as an exercise to the reader). No more information about the rest of the cases is given either, neither are papers cited for these methods that are used as baselines (at least not in that section). Also, there are some percentage points next to each method which presumably indicate (from the context) that these values correspond to accuracy without any perturbations (and hence the "clean" word appearing next to these numbers), but really, these things need to be spelled out in the paper. One cannot really throw some figures into a paper, with keywords resembling names of certain methods, but refuse to explain what these methods really are. After looking into the appendix, indeed the authors there explain the names of the different methods and cite the relevant work, but this is happening 14 pages after the figure was presented to the reader. I am sorry, but this is not how papers are read. I am really torn with this paper and I would like to see the views of the other reviewers. Nevertheless, I appreciate theoretical results and I am leaning more towards acceptance. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1. Are ReLU functions instrumental in the results that you prove?
Can these results be extended to other activation functions? Q2. Apart from the experimental proof that cats and dogs from CIFAR-10 is convexly separable, were you aware of any classification datasets that are convexly separable and could motivate the work that you did? Of course, this is not to diminish the work that you did; I am just asking. Q3. At the top of page 4 we see that a neural network is defined as a mapping $f \colon R^d \times R^n \rightarrow R$. Can you explain why your input is decomposed to $(x, y)$, with $x\in R^d$ and $y\in R^n$, and why we have R in the output? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive suggestions. We have addressed all of your comments and questions below, and revised the manuscript accordingly. 1. "The authors use... long sentences": We have identified and revised some of our lengthier sentences: "Specifically, we assume..." (line 59); "We characterize..." (line 94); "In contrast..." (line 130); "While high..." (line 286). Please let us know if there were other areas of the text that you found convoluted. 2/3. "Section 3 and especially Section 3.2, is hard to follow"... "it is hard to understand what is important": We appreciate your feedback on the framing of our theoretical results. As Reviewer 44oN raised similar concerns, please see our response and alterations in point 2 for Reviewer 44oN. If you have other suggestions on how we might adjust presentation and/or language to clarify our main contributions to the reader, we would appreciate hearing them. 4. "In Section 4... we [do not] get to see what the other baselines are": We have adjusted the experiments section according to each of the points you brought up. In particular, we have moved the most pertinent details of the experiments from Appendix E to Section 4, and we have explicitly defined the keywords in the legends, including the baselines (with citations) as well as the clean accuracy numbers. We believe that, in following your suggestions, these edits make the presentation of our experimental results much clearer to the reader. Please find the revised text in the general response. Q1. "Are ReLU functions instrumental in the results that you prove?": Only Theorems 3.6, 3.7, and the second part of Theorem 3.9 rely on ReLU activation functions, since we prove our uniform approximation theorem (Theorem 3.6) in terms of ReLU activations. In particular, our primary robustness certificate (Theorem 3.1) holds for general feature-convex models, which may be constructed using any convex nondecreasing activation functions.
Notice also that the primary result of Theorem 3.9 (the fact that high-dimensional uniform datasets are convexly separable with high probability) is unrelated to architecture choice. Q2. "...were you aware of any classification datasets that are convexly separable?": Yes. As mentioned in Appendix D, "Yousefzadeh [74] and Balestriero et al. [9] showed a related empirical result for CIFAR-10, namely, that no test set image can be reconstructed as a convex combination of training set images." It is important to note that their result does not show the convex separability of two classes of training images, and therefore does not allow one to conclude that a feature-convex classifier can achieve perfect training accuracy. Therefore, our experiment on the cats-dogs training dataset is required in order for us to come to the conclusion of Corollary 3.8. Q3. "Can you explain why your input is decomposed...": The work being cited there (Amos et al. [2]) proposed input-convex neural networks as a means to build a model $f(x,y)$ that is convex in the variable $y$, but possibly nonconvex in $x$. They use such a variable decomposition for the specific purposes of optimization-based inference with a real-valued output, i.e., mapping an input $x$ to an output defined by the convex optimization problem $\arg\min_{y} f(x,y) \in \mathbb{R}$. Since, in their application of optimization-based inference, one of the inputs to $f$ is being "optimized out," they need to include an auxiliary/decomposed input so that there is still an input $x$ to be fed to $\arg\min_y f(x,y)$. On the other hand, our work, as well as many other works that use input-convexity (e.g., [16,17,48,75,79] in our manuscript), are not interested in optimization-based inference, but rather seek to exploit input-convexity for other reasons, so there is no part of the model input being "optimized out." In our case, we exploit input convexity for purposes of model robustness.
Therefore, our work (and the others listed) have no need for an auxiliary input or an input decomposition. That is, we simply use models $g(x)$ that are input-convex in the entire variable $x$. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I have read the reviews of others as well as the response by the authors. I am happy with the answers provided by the authors as well as their willingness to improve the presentation of the paper. I will upgrade my presentation score from 1 to 2 and also increase my overall score from borderline accept to weak accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their time and constructive comments, as well as for taking our updates into consideration for their revised score.
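For readers unfamiliar with input-convexity, the property discussed above (a model $g(x)$ convex in the entire input $x$) can be illustrated with a minimal sketch; the tiny architecture and parameter values below are hypothetical and purely illustrative, not the model from the paper:

```python
import random

# A hypothetical minimal input-convex function g: R^2 -> R,
#   g(x) = sum_j v_j * relu(w_j . x + b_j)  with  v_j >= 0.
# ReLU of an affine map is convex, and a nonnegative combination of
# convex functions is convex -- the structural property referred to above.

def relu(t):
    return t if t > 0.0 else 0.0

W = [(1.0, -2.0), (-0.5, 0.3), (2.0, 1.0)]  # illustrative inner weights
B = [0.1, -0.2, 0.4]                        # illustrative biases
V = [0.7, 1.3, 0.2]                         # outer weights, must be >= 0

def g(x):
    return sum(v * relu(w[0] * x[0] + w[1] * x[1] + b)
               for w, b, v in zip(W, B, V))

# Numerically spot-check midpoint convexity: g((a+b)/2) <= (g(a)+g(b))/2.
random.seed(0)
convex_ok = True
for _ in range(1000):
    a = (random.uniform(-5, 5), random.uniform(-5, 5))
    b = (random.uniform(-5, 5), random.uniform(-5, 5))
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    convex_ok = convex_ok and g(mid) <= (g(a) + g(b)) / 2 + 1e-9
print(convex_ok)  # prints True
```

If any entry of `V` were negative, convexity could fail; this is why input-convex architectures constrain the weights acting on hidden (already convex) features to be nonnegative.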
Rebuttal 1: Rebuttal: We sincerely thank the Reviewers for their insightful comments and valuable suggestions. We are happy to see that 3/4 reviews are generally positive on our work, with primarily presentation-focused concerns that we address below. Reviewer g6eM has indicated their willingness to update their score with a convincing SVM comparison, which we have also provided. Please see our individual responses to each Reviewer, where we address each point raised and highlight the corresponding revisions we have made to the manuscript. Here, we briefly describe the main revisions: 1. Mathematical presentation: we have edited and added descriptions in Section 3 to ensure the theoretical results are easily understood by the reader, and to more clearly illustrate the importance of such results and how they play into the overall story of the paper. In doing so, we moved Propositions 3.4 and 3.5 to Appendix C, and added in their place a more easily readable description of what they say. 2. Experiments: we have moved the most pertinent details of the experiments from Appendix E to Section 4, explicitly describing the datasets, baselines (with citations), and our model's architecture, in addition to defining the keywords in the legends and the clean accuracy numbers reported in Figure 2. We believe that, in following the Reviewers' suggestions, these edits make the presentation and contextualization of our experimental results much clearer to the reader and aid in justifying our work. _Experimental section addition._ "This section compares our feature-convex classifiers against a variety of state-of-the-art baselines in the asymmetric setting. Before discussing the results, we briefly describe the datasets, baselines, and architectures used. For a more in-depth description and hyperparameter details, see Appendix E. **Datasets.** We evaluate our approach using four datasets. 
First, we consider distinguishing between $28\times 28$ greyscale MNIST digits $3$ and $8$ [37], which are generally more visually similar and challenging to distinguish than other digit pairs. Next, we consider identifying malware from the "Allaple.A" class in the Malimg dataset of $512\times 512$ bytewise encodings of malware [51]. We then consider distinguishing between shirts and T-shirts in the Fashion-MNIST dataset of $28\times 28$ greyscale images [70], which tend to be the hardest classes to distinguish [33]. Finally, we consider the $32\times 32$ RGB CIFAR-10 cat and dog images since they are again relatively difficult to distinguish [26,44,30]. The latter two datasets can be considered as our more challenging settings. All pixel values are normalized into the interval $[0,1]$. **Baseline Methods.** We consider several state-of-the-art randomized and deterministic baselines. For all datasets, we evaluate the randomized smoothing certificates of [72] for the Gaussian, Laplacian, and uniform distributions trained with noise augmentation (denoted RS Gaussian, RS Laplacian, and RS Uniform, respectively), as well as the deterministic bound propagation framework $\alpha,\beta$-CROWN [66], which is scatter plotted since certification is only reported as a binary answer at a given radius. We also evaluate, when applicable, deterministic certified methods for each norm ball. These include the splitting-noise $\ell_1$-certificates from [40] (denoted Splitting), the orthogonality-based $\ell_2$-certificates from [63] (denoted Cayley), and the $\ell_{\infty}$-distance-based $\ell_{\infty}$-certificates from [77] (denoted $\ell_\infty$-Net). The last two deterministic methods are not evaluated on the large-scale Malimg dataset due to their prohibitive runtime. 
Furthermore, the $\ell_{\infty}$-Net was unable to significantly outperform a random classifier on the CIFAR-10 cats-dogs dataset, and is therefore only included in the MNIST 3-8 and Fashion-MNIST shirts experiments. Notice that the three randomized smoothing baselines have fundamentally different predictions and certificates than the deterministic methods (including ours), namely, the predictions are random and the certificates hold only with high probability. **Feature-Convex Architecture.** Our simple experiments (MNIST 3-8 and Malimg) require no feature map to achieve high accuracy ($\varphi=\text{Id}$). The Fashion-MNIST shirts dataset also benefited minimally from the feature map inclusion. For the CIFAR-10 cats-dogs task, we let our feature map be the concatenation $\varphi(x)=(x-\mu,|x-\mu|)$, as motivated by Appendix B, where $\mu$ is the channel-wise dataset mean (e.g., size $3$ for an RGB image) broadcasted to the appropriate dimensions. Our MNIST 3-8 and Malimg architecture then consists of a simple two-hidden-layer input-convex multilayer perceptron with $(n_1,n_2)=(200,50)$ hidden features, ReLU nonlinearities, and passthrough weights. For the Fashion-MNIST shirts (CIFAR-10 cats-dogs, resp.) dataset, we use a convex ConvNet architecture consisting of $3$ ($5$, resp.) convolutional, BatchNorm, and ReLU layers. All models are trained using SGD on the standard binary cross entropy loss with Jacobian regularization, and clean accuracies are balanced as described in Section 1.1 and Appendix E.4 to ensure a fair comparison of different robustness certificates. **Results and Discussion.** Experimental results for $\ell_1$-norm certification are reported in Figure 2, where our feature-convex classifier radii, denoted by Convex*, are similar or better than all other baselines across all datasets. Also reported is each method's clean test accuracy without any attacks, denoted by "clean."..."
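As background for the randomized-smoothing baselines mentioned above: for a binary smoothed classifier whose top class receives probability $p_A$ under Gaussian noise $\mathcal{N}(0,\sigma^2 I)$, the certified $\ell_2$ radius takes the well-known Cohen et al.-style form $\sigma\,\Phi^{-1}(p_A)$. A hedged sketch with made-up values of $\sigma$ and $p_A$ (not numbers from the paper):

```python
from statistics import NormalDist

def certified_l2_radius(sigma, p_a):
    """Cohen et al.-style Gaussian smoothing certificate for a binary
    classifier: radius = sigma * Phi^{-1}(p_a) when p_a > 1/2."""
    if p_a <= 0.5:
        return 0.0  # no certificate when the top class is not a majority
    return sigma * NormalDist().inv_cdf(p_a)

# Illustrative values only: sigma = 0.5, top-class probability 0.99.
r = certified_l2_radius(sigma=0.5, p_a=0.99)
print(round(r, 3))  # a larger p_a or sigma yields a larger certified radius
```

Note that, as the rebuttal text says, such certificates are probabilistic ($p_A$ is itself estimated by sampling), in contrast to the deterministic certificates of the feature-convex approach.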
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
FourierGNN: Rethinking Multivariate Time Series Forecasting from a Pure Graph Perspective
Accept (poster)
Summary: The paper presents FourierGNN for multivariate time series forecasting from a pure graph perspective, performing matrix multiplications in Fourier space, which has not been investigated so far. The authors design a new hypervariate graph structure to consider spatiotemporal dynamics unitedly and reformulate the graph operations on the hypervariate graph in Fourier space. Extensive experiments have demonstrated superior performance with higher efficiency and fewer parameters over state-of-the-art methods. Strengths: 1. The paper is overall well-written. I think the contribution and novelty of the proposed work are clearly presented. 2. The idea of directly applying graph networks for multivariate time series forecasting seems novel and exciting. The hypervariate graph structure can encode spatiotemporal dependencies unitedly, and the visualizations in the experiments have also verified its advantages. 3. The authors argue that performing the multiplication between the input and the proposed FGO in Fourier space is equivalent to a graph convolution in the time domain, while the multiplications in Fourier space have lower complexity. The argument is proved by the authors and seems very meaningful. I think this argument provides a new path for conducting graph operations in Fourier space. 4. The effectiveness is validated through extensive experiments across seven real-world datasets. Sufficient analysis including efficiency analysis and ablation study is also provided. Weaknesses: 1. In the efficiency analysis section, the authors assess the overall efficiency of the forecasting process. To specifically evaluate the efficiency of FourierGNN, it would be more appropriate to focus on comparing the number of parameters involved in the graph operations with those of the baselines. 2. The MAPE (Mean Absolute Percentage Error) results on some datasets (e.g., Traffic, COVID-19) do not achieve state-of-the-art performance. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: FourierGNN demonstrates impressive performance across multiple datasets, with notable success observed on the COVID dataset. Could you explain the reasons that contribute to the superior performance of FourierGNN on the COVID dataset compared to other datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and valuable suggestions. In the following, we provide a detailed response to address all of your concerns. **W1** Since it is difficult and unfair to analyze the time complexity and parameter volumes of comparative methods that have different architectures, we report the empirical training time and parameter volumes of FourierGNN and the GNN-based baselines in the efficiency analysis. Meanwhile, we provide theoretical efficiency analysis between FGO/FourierGNN and graph convolution/GCN (a commonly-used module) in the analysis of time complexity. The efficiency analysis in **Model Analysis in Section 5.3** aims to investigate the parameter volumes and training time costs of FourierGNN and its comparative baselines. The comparison results show: 1) FourierGNN exhibits the lowest volume of parameters among the comparative models; 2) FourierGNN runs much faster than all baseline models. These results demonstrate the efficiency of our proposed FourierGNN compared with state-of-the-art GNN-based models. Regarding the time complexity of the graph operators in FourierGNN, we have discussed and provided the time complexity of FGO in **Complexity Analysis (Lines 203-209)**. Specifically, the computational time complexity of FGO is $\mathcal{O}(nd\operatorname{log}n+nd^2)$, while the time complexity of the equivalent graph convolution in the time domain, i.e., $AXW$, is $\mathcal{O}(n^2d+nd^2)$. This indicates the lower time complexity of FGO compared with the graph convolution. Accordingly, FourierGNN ($K$-order) has the time complexity of $\mathcal{O}(nd\operatorname{log}n+Knd^2)$, compared with the time complexity of $K$-layer GCNs $\mathcal{O}(Kn^2d+Knd^2)$. In addition, the parameter volumes of FGO and FourierGNN are $\mathcal{O}(d^2)$ and $\mathcal{O}(Kd^2)$ respectively, which are the same as those of the graph convolution and GCNs. 
We summarize the parameter volumes and time complexity in **Table 5 in the attached PDF**. **W2.** While FourierGNN does not achieve the best MAPE results on all datasets, it consistently ranks within the top-3 among all 14 comparative methods and demonstrates comparable performance to state-of-the-art MAPE results. Furthermore, FourierGNN outperforms all the baselines in terms of MAE and RMSE, even though it may not achieve the best MAPE scores. This is an acceptable outcome because the best-performing method on a given dataset does not necessarily achieve the best results on MAE, RMSE, and MAPE simultaneously. These three metrics reflect the accuracy of forecasting from different perspectives. In addition, FourierGNN, along with most of the baseline methods, adopts MSE as the objective function. As a result, the comparative methods tend to prioritize improvements in terms of MSE or RMSE. In summary, FourierGNN achieves the best MAE and RMSE scores while consistently ranking within the top-3 in terms of MAPE scores compared to state-of-the-art baselines. These results indicate the superiority of FourierGNN over other state-of-the-art models for MTS forecasting. **Q1.** The superior performance of FourierGNN on the COVID-19 dataset is reasonable and can be attributed to its ability to effectively capture and model the significant spatiotemporal dependencies present in the data. As shown in **Section E.1 Datasets**, the dataset covers COVID-19 hospitalization in the U.S. state of California (CA) from 01/02/2020 to 31/12/2020, provided by Johns Hopkins University. The COVID-19 dataset exhibits characteristics that align well with the principles of infectious disease transmission across different regions over time. 
Consequently, the variables in the dataset demonstrate high correlation with each other, and the datapoints within each variable and across different variables exhibit temporal correlation. These characteristics make the COVID-19 dataset particularly suitable for evaluating the effectiveness of our proposed FourierGNN in capturing spatiotemporal dependencies. To interpret/verify the capability of FourierGNN in spatiotemporal modeling for MTS forecasting, we conducted visualizations of the learned adjacency matrix from different perspectives: 1) Temporal Adjacency Matrix: In **Figure 3 of Appendix H.2**, we visualize the temporal adjacency matrix of eight variables. The results clearly demonstrate that FourierGNN learns distinct temporal patterns for each variable (county), indicating that the hypervariate graph can encode rich and discriminative temporal dependencies. 2) Adjacency Matrices in Different Layers: In **Figure 9 of Appendix H.2**, we display the adjacency matrices of variables in different layers of FourierGNN. The visualization reveals that the Fourier Graph Operator (FGO) can adaptively and effectively capture important patterns while removing noise, thereby enabling the learning of a discriminative model. 3) Final Adjacency Matrices for Consecutive Days: In **Figure 10 of Appendix H.2**, we present the final adjacency matrices of variables for four consecutive days. These matrices highlight the time-varying dependencies among variables and showcase the ability of FourierGNN to exploit such dependencies. Overall, these visualization results provide strong support for the superior performance of FourierGNN on the COVID-19 dataset, indicating its effectiveness in learning underlying node correlations and removing redundant correlations on the hypervariate graph. We will update the paper to address the above aspects and hope we have addressed your comments.
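The complexity comparison in the W1 response above can be made concrete with a rough operation-count sketch; constants are dropped, and the values of $n$ and $d$ below are illustrative, not taken from the paper:

```python
import math

# Operation counts (constants dropped) from the complexity analysis above:
#   FGO:                n*d*log(n) + n*d^2
#   graph convolution:  n^2*d      + n*d^2   (dense A X W)
def fgo_ops(n, d):
    return n * d * math.log2(n) + n * d * d

def gconv_ops(n, d):
    return n * n * d + n * d * d

# n = N*T nodes of the hypervariate graph; illustrative sizes only.
n, d = 2000, 128
ratio = gconv_ops(n, d) / fgo_ops(n, d)
print(ratio > 1)  # prints True: the dense graph convolution costs more
```

The gap grows with $n$, since the $n^2 d$ term of the dense graph convolution dominates the $nd\log n$ term of the FGO.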
Summary: This paper has studied a popular and important problem, i.e., modeling the intricate spatial and temporal dependencies among multivariate time series for accurate forecasting. To overcome the main limitation that existing works always separately model spatial and temporal, this work designs a new hypervariate graph structure to encode the spatiotemporal dynamics unitedly, and proposes a novel FourierGNN to learn the spatiotemporal dependencies. Extensive experiments on seven real-world datasets demonstrate FourierGNN achieves good performances in both accuracy and efficiency compared with state-of-the-art methods. Strengths: This paper presents a novel formulation from a pure graph perspective to model spatiotemporal dependencies for MTS forecasting. The work is interesting and original, which is different from previous GNN-based methods that typically model spatial and temporal dependencies with distinct graph and temporal networks. The novel pure graph modeling is straightforward yet quite meaningful for MTS forecasting, which brings up good inspiring insights. This paper proposes a graph neural network, namely FourierGNN, to cooperate with the pure graph formulation and provides a theoretical guarantee of effectiveness. This paper seems to have solid, extensive and diverse experiments, which are conducted on seven real-world different datasets. The extensive experimental results have clearly demonstrated the obvious performance improvement of FourierGNN in terms of accuracy and efficiency over state-of-the-art MTS forecasting models. Weaknesses: I have two concerns. My first concern is about the hypervariate graph structure. Although the authors stated that all variables at all timestamps can encode high-resolution relationships, it also may introduce some redundant information or unwanted correlations, such as some variables that are far apart in time. How can this be properly handled? 
Besides, I find that the core operations of FourierGNN are conducted in the Fourier space, so I am curious whether transferring the hypervariate graph structure into the Fourier space helps to learn dependencies. In addition, another concern is whether FourierGNN can also be applied to other domains apart from multivariate forecasting. Maybe it can be extended to other graph tasks. Minor typos: In the caption of Figure 2, Given the hypervariate graph $\mathcal{G}=(X^\mathcal{G}_t, A^\mathcal{G}_t)$ should be $\mathcal{G}_t=(X^\mathcal{G}_t, A^\mathcal{G}_t)$ Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Besides MTS forecasting, can FourierGNN be applied to other domains as well? 2. How does the proposed model handle the redundant information in the hypervariate graph structure? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive comments. We would like to respond to your comments as follows. **Q1. How to properly handle redundant or unwanted correlations in the hypervariate graph?** As stated in **Section 4.1 The Pure Graph Formulation**, we propose the hypervariate graph to connect all variables at all timestamps, where we view spatial dynamics and temporal dependencies from a united perspective. This benefits the modeling of real-world spatiotemporal inter-dependencies. Specifically, since we do not have a pre-defined graph structure connecting any two variables at any two timestamps, we initialize the hypervariate graph as a fully-connected graph and learn edge weights (i.e., node spatiotemporal inter-dependencies) adaptively from the training data. Accordingly, node correlations are adaptively learned under the supervision of MTS forecasting objectives. In other words, although we initially consider connections between all variables at all timestamps, significant node correlations are retained and redundant ones are reduced while training FourierGNN (i.e., while learning the spatiotemporal dependencies) via adaptively adjusting the correlation values. Empirically, we have visualized the learned adjacency matrices from three different perspectives: 1) temporal adjacency matrix of eight variates (see **Figure 3 in Section 5.4**); 2) adjacency matrices of variables in different layers of FourierGNN (see **Figure 9 in Appendix H.2**); and 3) final adjacency matrices of variables for four consecutive days (see **Figure 10 in Appendix H.2**). 
From these visualization results, we can clearly observe: 1) FourierGNN learns distinct temporal patterns for each variable (county), indicating that the hypervariate graph can encode rich and discriminative temporal dependencies (corresponding to Figure 3); 2) FGO can adaptively and effectively capture important patterns while removing noise, enabling the learning of a discriminative model (corresponding to Figure 9); and 3) FourierGNN is able to exploit the time-varying dependencies among variables (corresponding to Figure 10). Furthermore, we have conducted a visualization analysis to evaluate the effectiveness of the learned correlations in accordance with the real-world road map on the METR-LA dataset. The results provide strong evidence that the learned hypervariate graph structure can represent highly interpretable correlations, confirming the ability of FourierGNN to capture meaningful and relevant relationships among nodes. Please refer to **Figure 4 in Section 5.4** for a visual representation. These empirical findings support the notion that FourierGNN is capable of effectively learning underlying node correlations and removing redundant correlations based on the hypervariate graph. **Q2. Whether transferring the hypervariate graph structure into the Fourier space helps to learn dependencies.** Yes, we transfer the hypervariate graph structure into the Fourier space, which helps learn spatiotemporal dependencies from two perspectives: 1) **Effectiveness**: According to the convolution theorem (see **Appendix B**), the Fourier transform of a convolution of two sequences equals the pointwise product of their Fourier transforms. The theorem demonstrates that our proposed Fourier Graph Operator (FGO) is equivalent to the graph convolution operation in the time domain as shown in **Equations 4 and 5**, which provides a theoretical basis for FGO's ability to capture spatiotemporal dependencies on the hypervariate graph. 
See **Explanations and Proofs in Appendix C** for more details. In addition, the Fourier transform offers a global view of the data, which may benefit capturing the global characteristics of the whole sequence [20]. This advantage helps FGO to effectively capture and model the global spatiotemporal dependencies in the hypervariate graph. 2) **Efficiency**: As shown in **Definition 1**, the hypervariate graph $\mathcal{G}\_t$ as a fully-connected graph contains $NT$ nodes, and its corresponding adjacency matrix is ${A}^{\mathcal{G}}\_t \in \mathbb{R}^{NT \times NT}$. It is extremely time-consuming to perform GCN or GAT on the hypervariate graph since the time complexity of the graph convolution and graph attention is quadratic in the number of nodes (i.e., $(NT)^2$) and proportional to the number of edges (i.e., $(NT)^2$), respectively. In contrast, the time complexity of our proposed FGO is proportional to $NT\operatorname{log}(NT)$, where the log-linear $\mathcal{O}(n\operatorname{log}n)$ complexity makes FourierGNN much more efficient. Please refer to **Complexity Analysis in Lines 203-209** for more details. In summary, our proposed FourierGNN transfers the hypervariate graph into the Fourier space, facilitating the efficient learning of effective spatiotemporal dependencies on the hypervariate graph. **Q3. "Whether FourierGNN can also be applied to other domains ..."** Yes. In **Section 4.2 FourierGNN**, the paper presents the definition and formulation of the proposed FGO and FourierGNN, which are designed based on a graph. FGO, as well as FourierGNN, can be seen as a form of global convolution or multi-order convolutions, and they are not specifically limited to the time series domain. They can be applied to various other domains where graph-based learning models are applicable. When extending the application of FourierGNN to other domains, it is crucial to ensure that the underlying graph topology in those domains satisfies the conditions of the Green's kernel. **Q4. 
Minor typos** We will thoroughly check the paper and correct the typos in the final version. **Q5. "... can FourierGNN be applied to other domains as well?"** Please refer to our response in Q3. **Q6. "How does the proposed model handle the redundant information...?"** Please refer to our response in Q1. We will clarify the above in the final version and hope that we have addressed all your concerns.
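The convolution theorem invoked in the Q2 response above can be verified numerically; the sketch below uses a naive DFT and arbitrary illustrative sequences, checking that the DFT of a circular convolution equals the pointwise product of DFTs (the identity behind the claimed equivalence of FGO and graph convolution in Equations 4 and 5):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a length-n sequence."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def circular_conv(x, kern):
    """Circular (periodic) convolution of two length-n sequences."""
    n = len(x)
    return [sum(x[j] * kern[(i - j) % n] for j in range(n))
            for i in range(n)]

# Arbitrary illustrative values.
x = [1.0, 2.0, 0.5, -1.0]
kern = [0.3, -0.2, 0.1, 0.05]

lhs = dft(circular_conv(x, kern))                 # F(x * kern)
rhs = [a * b for a, b in zip(dft(x), dft(kern))]  # F(x) . F(kern), pointwise
max_err = max(abs(a - b) for a, b in zip(lhs, rhs))
print(max_err < 1e-9)  # prints True: the two sides agree
```

In practice one would use an FFT rather than this quadratic-time DFT; the FFT is what yields the $\mathcal{O}(n\log n)$ cost cited in the efficiency argument.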
Summary: The paper addresses the time series forecasting problem. The authors propose a model that represents each scalar observation as a node in a (fully connected) graph, encodes the nodes and finally regresses the future observations on all previous encoded observations (via a fully connected layer). As the encoding, they propose a discrete Fourier transform followed by several node-wise fully connected layers and finally an inverse Fourier transform. In experiments on 6 datasets they show improvements over several baselines. Strengths: s1. interesting problem: time series forecasting s2. interesting approach: parametrizing transformations in Fourier space s3. results showing improvements over many baselines Weaknesses: w1. a main aspect of the proposed model is not clear. w2. missed recent related work. w3. experimental protocol is unclear and deviates from the literature. w4. for a key component of the proposed model, there is no ablation study demonstrating its impact. Technical Quality: 3 good Clarity: 3 good Questions for Authors: w1. a main aspect of the proposed model is not clear. - the authors describe the Fourier graph layer as applying the discrete Fourier transform on the graph features X\in\R^{nd\times 1} of a complete graph, explicitly not the graph Fourier transform. Which dimension is the time dimension for this Fourier transform? The node dimension of X contains nd many entries, one for each time point and channel. While one can compute a Fourier transform also over this vector, what are its semantics? How does it capture time-varying information? And would this not crucially depend on how one is ordering these nd many entries? w2. missed recent related work. - esp. PatchTST [Nie et al. 2023] and DLinear [Zeng et al. 2023] are well-known models outperforming your baselines by a good margin. w3. experimental protocol is unclear and deviates from the literature. - what is the forecasting horizon \tau you are using? 
- the experiments report only a single number per dataset and error measure, while in the literature usually results for a portfolio of forecasting horizons are reported (e.g., 96, 192, 336, 720, as in the Fedformer paper). Why do you deviate from this standard? This way it is also not possible to compare your results with the results published in those papers. - usually results are also reported for further datasets such as ETTm2, Exchange, Weather and ILI. How does your model compare to those published results? w4. for a key component of the proposed model, there is no ablation study demonstrating its impact. - while there are many papers that simply operate on the Fourier spectrum of a time series, one key component of the authors' model seems to be that, after these operations, they move back into the time domain via the inverse Fourier transform. What is the impact of this back transformation? references: - Nie, Yuqi, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. “A Time Series Is Worth 64 Words: Long-Term Forecasting with Transformers.” arXiv, March 5, 2023. https://doi.org/10.48550/arXiv.2211.14730. - Zeng, Ailing, Muxi Chen, Lei Zhang, and Qiang Xu. “Are Transformers Effective for Time Series Forecasting?” In AAAI, 2023. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your review. We hope our response can address the misunderstandings and concerns. **w1** 1. The node features of the hypervariate graph are $X \in \mathbb{R}^{n \times d}$, where $n=NT$ is the number of nodes, $d$ is the number of features, $N$ is the number of variables, and $T$ is the input length. In our proposed FourierGNN, we conduct the DFT along the spatiotemporal dimension of $n$ (**see lines 227-228**). Note that the Fourier transform allows us to transform data from the time domain to the frequency domain, revealing its frequency spectrum. However, the Fourier transform is not limited to the time domain alone; it can be applied to various types of data beyond time series, including images and other multidimensional data. Accordingly, the Fourier transform is not necessarily performed along the time dimension; for example, one can apply the DFT to images to obtain frequency spectrum features. 2. According to the convolution theorem (see **Appendix B**), the Fourier transform of a convolution of two sequences equals the pointwise product of their Fourier transforms. Therefore, the multiplication between $\mathcal{F}(X)$ and FGO $\mathcal{S}$ can be written as $\mathcal{F}(\sum_{j=1}^n {{X}}[j]\kappa[i-j])=\mathcal{F}((X*\kappa)[i])=\mathcal{F}({X})\mathcal{S}$, corresponding to a graph convolution on the hypervariate graph (see **Equations 4 and 5**). In other words, FGO is equivalent to the graph convolution, and FourierGNN is equivalent to multi-order convolutions on the hypervariate graph (**see Proposition 1**). Mathematically, conducting the DFT along the spatiotemporal dimension of $n$ purposefully transforms the time-consuming graph convolution on the hypervariate graph into an efficient pointwise multiplication in the Fourier space (**see Complexity Analysis in Section 4.2**). 
Intuitively, it obtains the global frequency spectrum of the node features of the hypervariate graph, corresponding to the values of all variables at each timestamp, facilitating the learning of a high-resolution spatiotemporal representation across timestamps and variables (**explanations can be seen in Appendix C.1**). In addition, we have conducted visualization experiments to demonstrate that FourierGNN can learn both discriminative temporal dependencies and highly interpretable spatial dependencies (**Figures 3 and 4 in Section 5.4** and **Figures 9 and 10 in Appendix H Visualizations**). 3. Structure: According to **Definition 1**, the hypervariate graph connects any two variables at any two timestamps. It embodies the intra-series temporal dependencies (node connections within each individual variable), the time-varying inter-series spatial dependencies (node connections at each single time step), and the time-varying spatiotemporal dependencies (node connections between different variables at different time steps). More details can be seen in **Appendix C.1**. Methodology: FourierGNN, which stacks multiple FGOs, is equivalent to multi-order graph convolutions, enabling it to adaptively capture the abovementioned high-resolution spatiotemporal dependencies, including the time-varying correlations. Empirically: In **Figure 10 in Appendix H.2**, we visualize the learned adjacency matrix of 10 randomly-selected counties over four consecutive days on the COVID-19 dataset. The results reveal clear spatial patterns that exhibit continuous evolution in the temporal dimension, verifying that FourierGNN is able to exploit the time-varying dependencies among variables. 4. Our proposed FourierGNN **is not** influenced by the order of the $n$ node features. Structurally, since the hypervariate graph is a fully-connected graph, the order of its $n$ nodes is immaterial. 
Mathematically, given data $x[n]$ with $N$ datapoints, the DFT of $x$ is $\mathcal{X}[k]=\sum_{n=0}^{N-1} x[n]\cdot e^{-\frac{i2\pi }{N} kn }$. The frequency spectrum $\mathcal{X}[k]$ is unaffected by the order of the datapoints $x[n]$. To verify the claim, we randomly shuffled the order of the time series variables in the raw ECG data five times and evaluated our model on each shuffled set of data. The result is reported in **Table 3 in the attached PDF**, which shows that FourierGNN achieves consistent performance on raw and randomly shuffled data. **w2** Note that FourierGNN is a **GNN-based** model that incorporates **frequency analysis** and is tailored for short-term multivariate time series forecasting. Accordingly, we chose GNN-based models (AGCRN, StemGNN, MTGNN, GraphWaveNet, TAMP-S2GCNets, DCRNN, and STGCN), frequency-based models (SFM, FEDformer, Autoformer, and CoST), and short-term models (LSTNet, DeepGLO, and TCN), as well as one representative model (Informer), as our baselines. PatchTST (Transformer-based) and DLinear (MLP-based) are the latest representative works for long-term time series forecasting. Considering that they are neither short-term models nor GNN-based/frequency-based models, they were not included in the comparison. **Moreover, PatchTST and DLinear have neither compared their performance with GNN-based models nor conducted experiments under short-term settings.** To address your concern, we performed experiments comparing FourierGNN with PatchTST and DLinear on seven real-world datasets for short-term forecasting (the input length and the prediction length are 12). We report the results in **Table 1 in the attached PDF**. From the results, we find that FourierGNN outperforms PatchTST and DLinear on all datasets. 
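The DFT definition quoted above can be evaluated directly and compared against a library FFT; the toy sequence below is an assumption for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])  # toy sequence with N = 4 datapoints
N = len(x)

# Direct evaluation of the DFT definition:
# X[k] = sum_{n=0}^{N-1} x[n] * exp(-i * 2*pi * k * n / N)
X_manual = np.array([
    sum(x[n] * np.exp(-2j * np.pi * k * n / N) for n in range(N))
    for k in range(N)
])

# Matches NumPy's FFT, which uses the same convention.
assert np.allclose(X_manual, np.fft.fft(x))
```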
The results are reasonable because, compared with FourierGNN, DLinear and PatchTST are more effective at handling gradually evolving trends and long-term temporal correlations, but less effective at capturing complex, time-varying spatiotemporal dependencies. Due to the space limit, we respond to W3 and W4 in the general response and the attached PDF. --- Rebuttal Comment 1.1: Comment: Dear Reviewer ndBv, We thank you again for your review and effort. We were kindly wondering whether our responses have addressed your concerns. Your feedback is really important to us, and we look forward to further discussions with you. Authors --- Rebuttal Comment 1.2: Title: answer to rebuttal Comment: Dear authors, thanks for your extensive answers and additional experiments. You have resolved my concerns w1 and w4. About the other two: w2. PatchTST and DLinear - The numbers you report for e.g. PatchTST deviate from the published numbers, e.g., for dataset Weather, horizon 96, MAE - you report 0.034 (Table 2 in your rebuttal PDF), - the PatchTST paper reports 0.198. Can you explain these differences? It would be more convincing to reproduce the experimental settings of the baseline papers (and their numbers). w3. experimental protocol unclear and deviates from the literature. 1. for the main experiment in Tab. 1: what is $\tau$? 1? 2. what does "follow the experimental settings in short-term forecasting baselines, like LSTNet and StemGNN" mean exactly? - The LSTNet paper reports two different error measures, RSE and corr, so one cannot compare numbers directly. - The StemGNN paper reports RMSE for datasets Solar and Electricity, but they report different numbers than you do (their Table 2): - Solar: they report 0.07, you report 0.222 - Electricity: they report 0.06, you report 0.101 Are you using a vastly different split? Or how are these differences explained? 3. Do any of the numbers in your Table 1 coincide with some published numbers in the baseline papers? 
And if so, could you mark them, say with a star? --- Reply to Comment 1.2.1: Title: Thanks for your feedback (1/2) Comment: Dear Reviewer ndBv, We greatly appreciate your feedback, and thanks for your careful readings. We would like to clarify our experimental settings and the two points you mentioned. **About experimental settings** In the literature on short-term forecasting, previous models have employed diversified experimental settings in their experiments. - They use **different normalization methods**. For example, LSTNet normalizes raw data row by row using the maximum absolute value, short for max-abs normalization; GraphWaveNet uses Z-score normalization; StemGNN uses Z-score normalization for some datasets and min-max normalization for other datasets. - They use **different data splitting ratios**. For example, LSTNet uses 6:2:2; DCRNN uses 7:2:1; MTGNN and StemGNN use 6:2:2 for some datasets and 7:2:1 for other datasets. - They use **different prediction lengths**. For example, GraphWaveNet uses \{3,6,12\}; STGCN uses \{3,6,9\}; LSTNet uses \{3,6,12,24\}; AGCRN uses \{12\}; MTGNN uses \{1, 12\}; StemGNN uses different prediction lengths for different datasets, such as \{3,12,28\}. It is important to note that due to the significant variations in experimental settings among different baselines, we did not directly replicate the results reported in the baseline papers for our paper. In our work, 1. We first unify the experimental settings to guarantee a fair and more convincing comparison. Specifically, we - adopt the min-max normalization and the splitting ratio of 7:2:1 for all datasets, - fix the input length 12 and the output length 12 in Table 1, and - set the input length 12 and the output length \{3,6,9,12\} for the multi-step forecasting. An exception is made for COVID-19 where we adopt the ratio of 6:2:2 because the number of samples in COVID-19 is too small (355). Please refer to **Lines 240-244** for more details. 2. 
We then re-ran **all baselines under the above settings on all datasets, both for the experiments presented in our paper and for those conducted during the rebuttal phase**. Despite the substantial workload and considerable time cost, we firmly believe that employing a unified experimental framework and reproducing the results ensures a fairer comparison, ultimately contributing to the advancement of short-term forecasting. We believe our dedicated efforts in conducting these experiments align with your rigorous expectations regarding our experimental settings and results. Furthermore, due to the different experimental settings, **the results of the baselines reported in our paper may differ from those reported in their original papers**. This discrepancy is particularly prominent for baselines that utilize different normalization methods, such as LSTNet using max-abs normalization by default, StemGNN using Z-score or min-max normalization, and our settings using min-max normalization. This disparity is expected, as data normalized using different methods can have varying scales, while most multivariate time series (MTS) forecasting methods evaluate results on normalized data.
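To make the scale differences concrete, here is a small sketch (toy values, not drawn from any benchmark dataset) of the three normalization schemes discussed above:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 10.0])  # one toy series

max_abs = x / np.max(np.abs(x))                # LSTNet-style max-abs scaling
z_score = (x - x.mean()) / x.std()             # Z-score standardization
min_max = (x - x.min()) / (x.max() - x.min())  # min-max scaling to [0, 1]

# The three schemes place the same data on very different scales, so errors
# computed on normalized values are not directly comparable across papers.
assert np.isclose(max_abs.max(), 1.0)
assert np.isclose(z_score.mean(), 0.0)
assert min_max.min() == 0.0 and min_max.max() == 1.0
```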
Summary: In this paper, the authors study a problem in GNN-based multivariate time series (MTS) forecasting, i.e., modeling spatial correlations and temporal dependencies at the same time. In particular, the authors do not follow previous works in regarding the input as T graphs and capturing temporal dependencies between graphs with temporal networks, but propose to formulate the T graphs as a hypervariate graph (a fully-connected graph with NT nodes). Specifically, to deal with the huge node count NT in a hypervariate graph, the authors propose FourierGNN, which utilizes matrix multiplications in the Fourier space of graphs to decrease the quadratic computational complexity to a quasi-linear one. Empirical results on seven datasets show the effectiveness of this method. Strengths: 1. The authors provide a novel view of graph-based MTS forecasting called the hypervariate graph. 2. The authors propose the Fourier Graph Operator and FourierGNN for efficient convolution on the hypervariate graph. 3. In the authors’ experiments, FourierGNN obtains superior performance. Weaknesses: 1. The details of the experiments are not clear. The authors run previous models on different datasets but do not show whether some hyperparameters, such as the feature dimension, change with the dataset. 2. The authors claim that FourierGNN is efficient for the hypervariate graph, but do not provide other networks’ performance, such as GCN, on the hypervariate graph. 3. The models compared in the Efficiency Analysis are too old. 4. The authors do not explain the reason for the Green kernel. The Green kernel is a strong assumption. In line 161, $\kappa_{ij} = A_{ij}W$ together with $\kappa_{ij} = \kappa_{i-j}$ implies that $A_{j+k,j}$ is equal for any $j$, which strictly restricts the topology of the hypervariate graph. Besides, there is no visualization of A in each layer to check whether the Green kernel assumption is satisfied during training. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
What’s the performance of FourierGNN on longer prediction lengths such as {24, 36, 48, 60} in FEDformer? 2. What’s the meaning of “the same sparsity pattern” in line 196? 3. How to determine the value of A_i in equation (7), A_0 is identity and what about A_1, A_2? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please check the weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback and valuable suggestions. In the following, we provide a detailed response to address all of your concerns. **W1.** Thanks for your suggestion. We provide clearer details on the experimental settings of the baselines in Appendix E.2. In the experiments, we 1) followed the parameter configuration recommended by the original authors of each baseline for the datasets used in both the baseline paper and our paper; and 2) tuned the recommended parameter settings on datasets not used in the baseline paper to guarantee that the baselines achieve their best results. Important reproduction details for all baselines are provided in Appendix E.2. For example, the parameter settings of DeepGLO, TAMP-S2GCNets, DCRNN, and STGCN vary across datasets. **W2.** We have conducted a thorough analysis of the time complexity of our proposed Fourier Graph Operator (FGO) in comparison with graph convolution, as presented in **Lines 203-209** of the paper. To clarify further, we summarize the time complexity and parameter volumes in **Table 5 in the attached PDF**. Furthermore, to address your concern, we conducted additional experiments on the METR-LA and ECG datasets to compare the time complexity of FourierGNN with GCN. In the experiments, we replaced FourierGNN with GCN [1] and performed GCN on the hypervariate graph. We compare FourierGNN with two types of GCN: 1) GCN-N performs GCN on the graph of $N$ variable nodes; 2) GCN-NT performs GCN on the hypervariate graph with $NT$ nodes. Since GCN-N, as a typical GCN, requires a pre-defined graph topology, it cannot be run on the ECG dataset because that dataset has no pre-defined graph topology. For GCN-NT, we input an all-ones adjacency matrix as the graph topology. 
The corresponding results (averaged over five epochs) in terms of training time costs on the two datasets are reported in the following table: | Models | METR-LA (N=207, T=12)| ECG (N=140, T=12) | |:---|:---|:---| | FourierGNN | 99.76 $\pm$ 2.74 s/epoch | 8.98 $\pm$ 0.31 s/epoch | | GCN-N | 213.64 $\pm$ 1.21 s/epoch| --- | | GCN-NT| 1976.33 $\pm$ 6.24 s/epoch| 384.58 $\pm$ 2.86 s/epoch | The table reveals a notable efficiency advantage of FourierGNN, even showing superior performance compared to GCN with only $N$ nodes. We will add this experiment to the appendix of our final version. [1]. Kipf & Welling, Semi-Supervised Classification with Graph Convolutional Networks, ICLR 2017 **W3.** Since FourierGNN is a frequency-related GNN-based model, we previously chose GNN-based baselines for the efficiency analysis. We will add two of the **latest frequency-related** baselines, i.e., Autoformer (NeurIPS 2021) and FEDformer (ICML 2022), in the **Efficiency Analysis of Section 5.3**. The new results are presented in **Table 6 in the attached PDF**. **W4.** Note that the hypervariate graph $\mathcal{G}\_t$ is a **fully-connected graph** (i.e., $A=\{1\}^{n\times n}$), as shown in **Definition 1**. Accordingly, the Green kernel assumption holds trivially on the hypervariate graph. Regarding the hypervariate graph: since the underlying topology of the $NT$ nodes is generally not known in advance, we initialize the hypervariate graph as a fully-connected graph and subsequently learn the edge weights (i.e., node correlations or spatiotemporal dependencies) on the graph (more explanation can be seen in **Appendix C**). In addition, since the hypervariate graph is fully connected, we have visualized the learned node correlations (edge weights) in each layer of the hypervariate graph in **Figure 9 in Appendix H.1**. 
Specifically, we have visualized the learned adjacency matrices corresponding to the original spectrum of the input, as well as the outputs of the first, second, and third layers of FourierGNN in **Figure 9**. These results show that our FGO can adaptively and effectively capture important spatiotemporal correlations while removing noise, enabling the learning of a discriminative model. **Q1.** To address your concern, we performed additional experiments comparing the performance of FourierGNN and FEDformer on longer prediction lengths \{12,24,36,48,60\} with an input length of 24 on the METR-LA dataset. The results are as below: | Model| Metric|12 | 24 | 36 | 48 | 60 | |:----|:----|:----|:----|:----|:----|:---| |FEDformer|MAE| 0.108| 0.120| 0.137| 0.148| 0.163| | |RMSE|0.190 | 0.216| 0.231| 0.259| 0.278| |FourierGNN|MAE| 0.087| 0.115| 0.140| 0.155| 0.169| | |RMSE| 0.169| 0.207| 0.230| 0.265| 0.287| The results demonstrate that FourierGNN achieves significantly better performance than FEDformer on shorter prediction lengths, such as \{12, 24\}, but underperforms FEDformer on longer prediction lengths \{36,48,60\}. This is reasonable because FEDformer, being a Transformer-based baseline, is effective at capturing long-range temporal dependencies, while FourierGNN, like other GNN-based models, is skillful at capturing local, short-term spatiotemporal dependencies. **Q2.** The same sparsity pattern means that the positions of non-zero elements in two matrices are identical. The two matrices have the same structure of connections between elements, even though the actual values of the non-zero elements may differ. **Q3.** Since the underlying structure of the hypervariate graph is not given, we initialize the hypervariate graph as a fully-connected graph, i.e., $A=\{1\}^{n\times n}$. All adjacency matrices $\{A_i\}_{i=1}^K$ in the $K$-order FourierGNN share the same sparsity pattern as $A$ and are fully connected. 
Their edge weights, however, are learned while training FourierGNN on a specific dataset, which corresponds to learning the spatiotemporal dependencies between nodes. We will clarify the above in the final version, and we hope we have addressed all of your comments.
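The "same sparsity pattern" notion from Q2 can be expressed as a one-line check. The helper below is a hypothetical illustration, not code from the paper:

```python
import numpy as np

def same_sparsity_pattern(A, B):
    """True iff the positions of non-zero entries in A and B are identical."""
    return np.array_equal(A != 0, B != 0)

A = np.ones((4, 4))  # fully-connected initialization, A = {1}^{n x n}
# Hypothetical learned edge weights: all strictly positive, so non-zero everywhere.
W = np.random.default_rng(1).uniform(0.1, 1.0, size=(4, 4))

assert same_sparsity_pattern(A, W)                # same structure, different values
assert not same_sparsity_pattern(A, np.eye(4))    # identity has zeros off-diagonal
```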
Rebuttal 1: Rebuttal: Dear Reviewers, ACs, and the SAC: We thank all reviewers for their valuable comments. We respond to all of the reviewers' comments; in particular, after carefully considering ndBv's comments, we realize there might be some potential misunderstandings. We have tried our best to clarify these misunderstandings in the specific rebuttal. **Short-term vs. long-term forecasting presents significant differences** + Long-term forecasting focuses on a long historical context to efficiently capture long-range dependencies such as periodic patterns and trends. In contrast, short-term forecasting often involves dealing with rapidly changing and dynamic patterns. + In the literature, the benchmark datasets for short-term and long-term forecasting are different. Most long-term forecasting datasets contain only a few variables, such as Exchange (8 variables), ETTh, ETTm, ILI (7 variables), and Weather (21 variables). In contrast, short-term forecasting datasets contain more variables; for example, the datasets in StemGNN have at least 140 variables except for the COVID-19 dataset, and the two datasets in AGCRN have 307 and 170 variables, respectively. + Among deep networks, SOTA long-term forecasting models are mainly Transformer-based and MLP-based, while SOTA short-term forecasting models are mainly GNN-based. This is because GNN-based models are more effective at capturing spatial correlations among variables and require sufficient variables to guarantee an appropriately sized graph, while Transformer-based/MLP-based models are more efficient at processing very long inputs/predictions. In summary, our FourierGNN aims to learn spatiotemporal dependencies and is designed specifically for short-term forecasting. The experiments are consistent with the literature on short-term forecasting, comparing against SOTA GNN-based models, frequency-based models, and short-term models. **W3** 1. 
We follow the experimental settings of short-term forecasting baselines, like LSTNet and StemGNN, and the forecasting horizon $\tau$ in our experiments is set to 3, 6, 9, and 12. 2. Based on the above discussion of the differences between short-term and long-term forecasting, and as shown in the literature, it would generally be unfair to compare long-term baselines with short-term baselines, or vice versa. The response to W2 verifies this conclusion: the long-term baselines underperform FourierGNN on short-term forecasting. To further address your concerns, we have evaluated FourierGNN's long-term forecasting performance against PatchTST and DLinear. The experimental results are shown in **Table 2 in the attached PDF**. From the results, we find that FourierGNN does not perform very well, which is attributed to the fact that FourierGNN focuses on learning unified spatiotemporal dependencies, while long-term forecasting often focuses on long-term periodic patterns and trends. 3. As discussed above, ETTm2, Exchange, Weather, and ILI are generally used for evaluation in long-term forecasting. **W4** 1. Actually, in the literature, it is quite prevalent to employ a combination of Fourier transform (FT) and inverse Fourier transform (IFT) operations, as in StemGNN, FEDformer, Autoformer, CoST, FiLM [1], and FreDo [2]. These models transform time series into the frequency domain via the FT, then perform calculations in the frequency domain, and finally apply the IFT to transform the results back to the time domain for subsequent operations. - This is because the frequency spectrum in the frequency domain is complex-valued (consisting of a real part and an imaginary part), while data in the time domain is real-valued. 
It is essential to perform the inverse Fourier transform to map complex-valued frequency values back to real-valued time-domain values; otherwise, complex-valued outputs cannot be fed into a traditional (real-valued) neural output layer for real-valued forecasts. - According to the convolution theorem, the recursive multiplication of FGOs (containing a pair of FT and IFT) in Fourier space is equivalent to multi-order convolutions (see **Proposition 1**). In other words, multi-order convolutions in the time domain can be efficiently conducted by leveraging the FT and IFT (see **Eq. 7**), which is the core idea of this work. - The FT and IFT are linear transforms, which do not add or discard any information. As we know, some models, such as BTSF [3] and TF-C [4], incorporate frequency-domain and time-domain features to enhance time series representations. Since the frequency features in the frequency domain are complex-valued, to combine them with real-valued time-domain features, these models **discard** the phase information and only take the frequency magnitude spectrum as **features**. Since the magnitude spectrum is real-valued, there is no need to perform the IFT. They leverage spectrum information as features for **data augmentation**, while in our model, we take advantage of the efficiency of the Fourier transform. These are two different paradigms. 2. We further perform an ablation study on the METR-LA and ECG datasets to compare FourierGNN with variants that keep either the real part or the imaginary part of the frequency spectrum in FourierGNN. In this case, the inverse Fourier transform is removed from FourierGNN's variants. The results are reported in **Table 4 in the attached PDF**, which shows that the real part is more important than the imaginary part for performance, and that both the real part and the imaginary part are indispensable for FourierGNN. [1]. FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting. 
NeurIPS 2022 [2]. FreDo: Frequency Domain-based Long-Term Time Series Forecasting. CoRR abs/2205.12301 (2022) [3]. Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion. ICML 2022. [4]. Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency. NeurIPS 2022 Pdf: /pdf/77f4cefff48e2975f2b750042f5ebd18f44741eb.pdf
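The real-vs-complex point in the W4 discussion above (why a pair of FT and IFT is needed around real-valued layers) can be illustrated with a short NumPy sketch; the toy data is an assumption, not the model's actual pipeline:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])  # real-valued time-domain input

# The frequency spectrum is complex-valued (real + imaginary parts).
spec = np.fft.fft(x)
assert np.iscomplexobj(spec) and np.any(spec.imag != 0)

# A real-valued output layer cannot consume `spec` directly; the inverse
# transform maps it back to (numerically) real time-domain values.
x_back = np.fft.ifft(spec)
assert np.allclose(x_back.imag, 0.0)
assert np.allclose(x_back.real, x)
```

The round-trip loses no information, which matches the rebuttal's point that FT and IFT are linear transforms that neither add nor discard anything.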
NeurIPS_2023_submissions_huggingface
2023
ATTA: Anomaly-aware Test-Time Adaptation for Out-of-Distribution Detection in Segmentation
Accept (poster)
Summary: The manuscript considers dense OOD detection under domain shift. The manuscript shows that the contemporary methods for dense OOD detection experience performance drop under domain shift and propose an adaptation framework to mitigate the issue. The proposed framework has two steps. The first step determines whether domain shift exists and attempts to reduce it by adapting the statistics of batch normalization layers. The second step iteratively adapts network parameters by optimizing outlier-aware self-supervised loss. Outlier identification during the test-time training is done by renormalizing arbitrary OOD scores. The resulting method can accommodate various anomaly detectors and achieves competitive results in considered benchmarks for dense OOD detection with and without domain shift. Strengths: S1. The road driving scenes indeed experience domain shift. E.g. changes in weather conditions and geolocation affect the captured scenes. Hence, the task of dense OOD detection under domain shift makes sense and appears to be novel. S2. The developed method can accommodate multiple methods for dense OOD detection, which advocates for general applicability. S3. The method achieves competitive results in OOD detection in road driving scenarios under considered domain shifts. Weaknesses: W1. Contemporary works for dense predictions in traffic scenes consider four main types of domain shift: geolocation, weather conditions [a], day-to-night [b], and synthetic to real [c]. Yet, the manuscript deals only with dense OOD detection under different geolocation (RoadAnomaly). Moreover, FS Static-C contains blurred and colour-jittered images which may not adequately reflect real-world domain shifts in traffic scenes. Authors should experimentally cover all possible domain shifts in traffic scenes. W2. The proposed framework requires gradient-based optimisation (and hence backpropagation) during the inference. 
The manuscript should report the time and memory overhead as in [14,21]. Adding a considerable computational burden may make the proposed method inapplicable to the considered application. W3. The proposed framework may only work with models which have batchnorm layers (Sec. 3.3). The manuscript should reflect on models which do not use batch norm layers (e.g., attention-based architectures [d]). W4. The manuscript misses relevant works [e,f,g]. [a] Christos Sakaridis, Dengxin Dai, Luc Van Gool: Semantic Foggy Scene Understanding with Synthetic Data. Int. J. Comput. Vis. (2018) [b] Christos Sakaridis, Dengxin Dai, Luc Van Gool: Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation. ICCV 2019. [c] Lukas Hoyer, Dengxin Dai, Luc Van Gool: DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation. CVPR 2022. [d] Robin Strudel, Ricardo Garcia Pinel, Ivan Laptev, Cordelia Schmid: Segmenter: Transformer for Semantic Segmentation. ICCV 2021. [e] Shu Kong, Deva Ramanan: OpenGAN: Open-Set Recognition via Open Data Generation. ICCV 2021 [f] Matej Grcic, Petra Bevandic, Sinisa Segvic: Dense Open-set Recognition with Synthetic Outliers Generated by Real NVP. VISAPP 2021. [g] Chen Liang, Wenguan Wang, Jiaxu Miao, Yi Yang: GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models. NeurIPS 2022 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: C1. A more detailed Fig. 2 may improve clarity. C2. Since the manuscript introduces a new task, it is very important to establish proper evaluation experiments. How does the model perform under different weather and illumination? E.g., when applying transformations similar to [a] which imitate rain/fog. C3. The SMIYC benchmark [3] contains images with different illumination (night) and weather (snow, fog). How does the method perform on this benchmark? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The manuscript did not reflect on the limitations of the model. One possible limitation may be described in W3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the novelty and applicability of our work. We have added results on the SMIYC benchmark, an inference time analysis, and a refined Figure 2 in our general response and the attached PDF. We thank the reviewer for pointing out missing relevant works and will include them in the related work section. We address your other concerns below. ## W1: Different types of domain shift In this work, we focus on the dense OOD detection problem and consider realistic domain shifts likely to exist in this task. We agree that geolocation, weather conditions, and day-to-night shifts are relevant. However, the synthetic-to-real shift is less applicable in our context. Besides, we emphasize that the adopted datasets, namely the RoadAnomaly and FS Static-C datasets, do cover various domain shifts. Specifically, - For the RoadAnomaly dataset, images vary significantly from the specialized collections in Cityscapes. Differences include geographical location, weather, lighting (day/night), road conditions, camera settings, etc. Please refer to our reply for Reviewer 7M1E - 2 for the detailed analysis and the submitted PDF for image examples. - FS Static-C is a synthetic dataset comprised of random smog, color shifting, and Gaussian blur. The random smog simulates weather conditions, the color shifting may represent various lighting scenarios such as dusk or sunset, and the Gaussian blur could signify different camera settings. Although these transformations might not perfectly align with real-world scenarios, they are commonly used in the domain adaptation literature [7,27,45] and reflect the robustness of different models against domain shifts, which is the primary focus of this dataset in our paper. We also performed additional experiments on the SMIYC benchmarks, which contain variations in geolocation, weather conditions, and nighttime imagery. 
Our results show significantly improved OOD detection, further validating the effectiveness of our method (see the General Response for details). We note that our work differs from those specifically targeting domain adaptation; we focus more on the pronounced issues within the OOD detection task, such as ensuring the model works both with and without domain shift, without affecting the prediction of OOD scores. ## W2: Memory Overhead Memory consumption does increase temporarily to store activations during loss computation, but returns to the level of direct inference after backpropagation. Table E1 below illustrates that our memory requirement is higher than direct inference but significantly more efficient than Tent. Table E1: Comparison of maximum GPU memory consumption during test time (in MB), using DeepLab v3+ with WideResNet34 on an NVIDIA TITAN Xp, input image size (1024 x 2048). | | Max Memory (MB) | |------------------|-----------------| | Direct Inference | 1170.2 | | ATTA (Ours) | 3388.6 | | Tent | 12796.7 | ## W3: Architectures without BN While we primarily target models with BN, it is worth noting that most modern networks utilize some type of normalization, including Instance Normalization (IN) and Layer Normalization (LN). For IN and LN, we note that they inherently offer more stability across domain shifts, thus requiring no special adjustment in our first cascaded stage. This is because, unlike BN, they normalize individual samples, making them less sensitive to variations between domains [e1]. We have also empirically tested our anomaly-aware self-training on the mentioned Segmenter backbone [d], which employs layer normalization. The results shown in Table E2 demonstrate the enhanced performance when incorporating our method. 
Still, we note that, since our goal is to adapt at the test phase without retraining, and since BN is prevalent in most network architectures, it is practical to recognize BN's ubiquity and design accordingly, especially in our task, where many methods [5,21,42] are based on the DeepLab v3+ structure, which utilizes BN. In summary, our framework is broadly applicable across the vast majority of modern neural network architectures that employ some form of normalization. The test-time adaptation of models without normalization layers remains an open UDA problem, which is out of the scope of this work. Table E2. Results on the RoadAnomaly Dataset with Segmenter. We use the pre-trained model weights on the Cityscapes dataset provided by [d]. | | AUC $\uparrow$ | AP $\uparrow$ | FPR $\downarrow$ | |--------------|-----------|-----------|-----------| | Energy | 95.43 | 75.60 | 19.76 | | +ATTA (Ours) | **96.49** | **84.77** | **18.56** | | Max logit | 94.81 | 71.65 | 20.96 | | +ATTA (Ours) | **96.08** | **81.88** | **19.19** | ## C2: Different weather and illumination We have considered various domain shifts in our dataset, including different weather and illumination conditions (cf. the response to W1), which establishes a proper evaluation protocol. In particular, we have actively considered foggy conditions (cf. Appendix B) and conducted extensive testing of our model under various weather and illumination scenarios (cf. General Response 1). We appreciate the reference to Sakaridis et al. [a]. However, it is not directly applicable to our task, as the Foggy Cityscapes dataset lacks the novel-class information needed for OOD detection evaluation. Additionally, the code to generate fog from [a] seems to require specific disparity and camera information, which may not be accessible in our OOD test dataset. ## Limitation We kindly note that we discuss some limitations of our method in the conclusion. 
We have provided a response to your concern in W3, demonstrating that our model can be applied to network architectures without BN. [e1] Shuaicheng Niu et al. Towards Stable Test-time Adaptation in Dynamic Wild World. ICLR, 2023. --- Rebuttal 2: Title: Post rebuttal Comment: The presented response addresses most of my concerns (W2, W3, W4.) and dense OOD detection under domain shift is an attractive research topic. However, the current state of the manuscript does not consider domain shifts in isolation which is common in the field of domain adaptation e.g. [1]. On the contrary, RoadAnomaly mixes images with different domain shifts. The dataset should be sorted according to the domain (and even enlarged) to show more informative results. [1] Christos Sakaridis, Dengxin Dai, Luc Van Gool: ACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding. ICCV 2021. I will increase my score to 4, but I still believe the manuscript should undergo a major revision and another round of reviews. --- Rebuttal Comment 2.1: Title: Response to Reviewer XNCt Comment: Thank you for your comment and for recognizing the appeal of dense OOD detection under domain shift. Regarding your concern, we would like to clarify that our primary focus is on enhancing OOD detection across various levels of domain shifts, rather than adhering strictly to traditional domain adaptation. This approach, which includes consideration of datasets with and without significant domain shifts in isolation, emphasizes our model's practical ability to cope with different degrees of shifts. We believe this perspective is more aligned with the essential and real-world concerns of OOD detection. Concerning the isolation of different types of domain shifts, we appreciate the reference to datasets like [1] that categorize shifts. Yet, in current dense OOD detection datasets, isolating specific domain shifts is a practical challenge. 
This complexity stems from the lack of explicit consideration of domain shifts during dataset construction. For instance, in datasets like Road Anomaly, various shifts may be intermingled within an image (such as different road conditions combined with diverse weather), making a clear division difficult. To provide information on our model's behavior under various domain-shift types, we have included visualizations in the manuscript (Fig. 2) and the attached PDF (Fig. 2), demonstrating our method's stable performance across different scenarios. In response to your suggestion, we also conducted additional experiments by introducing fog, color shifting, and Gaussian blur individually to the original FS Static dataset. Results, as shown in Table E3, reveal consistent improvements by our method across isolated domain shifts. We share the hope for future datasets with clearly defined domain shifts and novel classes to further study this evolving field of OOD detection. However, such exploration is beyond the scope of this paper, and we look forward to benchmarks that will facilitate research in this vital area.

Table E3. We modify the original FS Static dataset by introducing fog, color shifting, and Gaussian blur separately, to analyze model performance on isolated domain shifts. We compare our method with the previous OOD detection method, PEBAL.

| | Fog | | | Color | | | Blur | | |
|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| | AUC $\uparrow$ | AP $\uparrow$ | FPR95 $\downarrow$ | AUC $\uparrow$ | AP $\uparrow$ | FPR95 $\downarrow$ | AUC $\uparrow$ | AP $\uparrow$ | FPR95 $\downarrow$ |
| PEBAL | 48.37 | 1.58 | 91.82 | 98.58 | 81.93 | 6.26 | 99.46 | 89.46 | 2.07 |
| PEBAL + Ours | 98.92 | 79.98 | 3.48 | 99.15 | 87.43 | 2.91 | 99.55 | 90.73 | 1.71 |
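For concreteness, the three isolated corruptions discussed above can be approximated with simple per-pixel transforms. The following is a hypothetical pure-Python sketch (not the exact corruption code behind Table E3), operating on an image stored as nested lists of RGB tuples:

```python
import math

def add_fog(img, t=0.5, fog=255.0):
    """Blend every channel toward a uniform bright 'fog' value: p' = (1-t)*p + t*fog."""
    return [[tuple((1 - t) * c + t * fog for c in px) for px in row] for row in img]

def color_shift(img, dr=20.0, dg=0.0, db=-20.0):
    """Shift each RGB channel by a constant offset, clipped to [0, 255]."""
    def clip(v):
        return max(0.0, min(255.0, v))
    return [[(clip(r + dr), clip(g + dg), clip(b + db)) for r, g, b in row] for row in img]

def gaussian_blur_1d(vals, sigma=1.0, radius=2):
    """Convolve a 1-D channel signal with a normalized Gaussian kernel (edges clamped).
    A 2-D Gaussian blur applies this separably along rows and then columns."""
    kernel = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n = len(vals)
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += k * vals[idx]
        out.append(acc)
    return out
```

In practice such corruptions are applied with image libraries; the sketch only shows the per-pixel arithmetic, where fog dampens contrast by blending toward a bright constant, the color shift offsets channels with clipping, and the blur averages neighbors under a normalized Gaussian kernel.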
Summary: This paper proposes ATTA (Anomaly-aware Test-Time Adaptation), which introduces test-time domain adaptation (TTA) for anomaly segmentation. As a result, anomaly segmentation can be performed well even in a harsher environment where domain shift and semantic shift occur simultaneously. To create an environment with domain shift, the authors create the FS Static -C (Corrupted FS Static) dataset, and develop a Selective Test-Time Batch Normalization (SBN) method to propose a new TTA approach. They also introduce self-supervised learning using GMM. Strengths: 1. The authors show existing methods are vulnerable to domain shift by creating the FS Static -C dataset and experimenting with it. 2. The effectiveness of TTA in the field of anomaly segmentation is demonstrated experimentally (Table 1). 3. ATTA, with SBN and self-supervised learning via GMM, shows better performance than TBN or Tent, which were used in the existing vision field. 4. The authors provide a mathematically convincing motivation. Weaknesses: 1. Table 1 shows that ATTA is exposed to the FS Static -C dataset at test time, and the FPR is re-measured for FS Static -C and the original dataset (FS Static). Therefore, the OOD data that must be classified has already been seen by the model, so good results are expected. As FS Static -C is a variation of FS Static, it can have a significant impact on the performance of the original dataset. In order to make a meaningful comparison, at least other methods such as Meta-OoD should also be exposed to the FS Static -C dataset and then compared. 2. The contribution and effect of ATTA are unclear. Using OOD data to improve detection performance has been proposed in the past (e.g., Outlier Exposure [1], Meta-OoD [2], etc.). ATTA also eventually exposes the model to additional OOD data to improve detection performance. However, it is necessary to add a description of the advantages that ATTA gains over existing methods by being exposed at test time.
For example, it can be shown in a graph that the segmentation performance increases as the batch progresses during test time. It is also necessary to show how much the performance is improved through TTA on datasets other than FS Static -C. 3. It is unclear whether a fair comparison was made with the methods using OOD data (e.g., Meta-OoD, Synboost, DenseHybrid) in Table 2. Since TTA is able to obtain more information than MSP, Entropy, Mahalanobis, and other post hoc methods, ATTA should be superior to these methods that do not use additional OOD data. Therefore, the main competitors of ATTA are methods that utilize additional OOD data in learning (e.g., Meta-OoD, Synboost, DenseHybrid). The authors' method ATTA uses the FS Static -C dataset as OOD data for training, but the OOD data used for training in the existing methods mentioned above is not specified. Therefore, it is necessary to perform an ablation study to determine whether the superior performance of ATTA is the effect of ATTA itself, or simply the effect of data augmentation caused by the introduction of the FS Static -C dataset. In addition, ATTA adopted PEBAL as a partner and compared against the existing report of PEBAL, which was trained on COCO as additional OOD data. PEBAL should also be compared after being exposed to the FS Static -C dataset (excluding ATTA). 4. The metrics used are AP, AUROC, and FPR95. However, except for AP (Average Precision), these metrics are mainly used in OOD classification rather than anomaly segmentation. It is also necessary to compare the sIoU, PPV, and F1 metrics proposed in the benchmark paper [3] mentioned by the authors in related works. 5. As the authors mentioned in the Conclusion, since TTA is used, learning is performed simultaneously at test time, so inference is slow. It is necessary to check whether the overhead is too large by comparing the inference time for each method.
In particular, GMM clustering must be performed for each image, and it is necessary to check whether the overhead due to this is excessive. [1] Hendrycks, Dan, Mantas Mazeika, and Thomas Dietterich. "Deep anomaly detection with outlier exposure." arXiv preprint arXiv:1812.04606 (2018). [2] Chan, Robin, Matthias Rottmann, and Hanno Gottschalk. "Entropy maximization and meta classification for out-of-distribution detection in semantic segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [3] Chan, Robin, et al. "SegmentMeIfYouCan: A benchmark for anomaly segmentation." arXiv preprint arXiv:2104.14812 (2021). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Is the FS Static -C dataset just a data augmentation of the original FS Static dataset? Or is there a significant difference, enough to be called another 'dataset'? 2. Is there a performance change when ATTA is trained on the original FS Static dataset rather than the FS Static -C dataset? 3. Is there a performance change when K-means is used instead of GMM clustering to separate inlier and outlier clusters from the OOD score set G for self-supervised learning? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: 1. The authors have well summarized the common features of the OOD field that are affected by the performance of the backbone, and the weakness that inference is slow because learning is performed at test time when TTA is used. 2. The inference time is expected to be slow, so it is necessary to compare the inference time for each method to check whether the overhead is too large.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and detailed feedback. Unfortunately, there appears to be some misunderstanding regarding our method, which we will clarify in the following responses.

## W1: Clarification of the FS Static -C dataset

We would like to clarify that our comparison is indeed fair with other OoD methods, as our approach does not expose the model to any more information than previous methods. Specifically, our algorithm only encounters online unlabeled image data during inference, a procedure that is also followed by other OOD methods, including Meta-OoD. FS Static-C is merely a testing dataset used to examine the performance of existing OOD methods when a domain shift is introduced. Our algorithm does not utilize any additional information beyond the testing images. Any stated impact of FS Static-C on FS Static appears to be a misunderstanding.

## W2: Our Contribution compared to existing OOD methods

**Our Contribution:** Our ATTA method leverages online unlabeled test images to enhance OOD detection capacity in potential domain-shift scenarios. This stands apart from previous work in two key aspects: - *We Do Not Rely on Additional Data:* We exclusively utilize online unlabeled test images, which are naturally available during inference, reflecting the characteristics of the test domain. This presents a unique advantage of our method, as methods like Outlier Exposure and Meta-OoD depend on utilizing labeled OOD data during training and are not applicable to our online and unlabeled scenario. - *We Explicitly Address Domain Shift:* This is an area often overlooked by existing OOD detection methods. Our experiments (see Table 1) underline the fragility of prior methods when faced with domain shifts and validate the effectiveness of our approach in these situations. **Segmentation performance as the batch progresses:** We appreciate the suggestion to showcase how segmentation performance might evolve with more data.
However, it is worth noting that our design handles different batches of images individually (cf. Sec 3.2), reflecting the reality that different domains may be represented in each batch. **Performance of TTA on other datasets:** In addition to FS Static-C, we have evaluated both our method and other test-time adaptation techniques (e.g., TBN, Tent) across several datasets, including the Road Anomaly dataset, Fishyscapes Static dataset, and Fishyscapes Lost And Found dataset (results detailed in Table 2 and Appendix Table 2). We have further included extra results on the SegmentMeIfYouCan (SMIYC) benchmark, outlined in the General Response. For all these datasets, we see a clear performance increase from combining our algorithm, which indicates its efficacy.

## W3: Fair Comparison

**Fair Comparison:** As detailed in our previous response to W1, our method does not utilize FS Static-C or any additional OOD data for training. Instead, we only employ the online unlabeled test images that are naturally available during inference. In Table 2, we have maintained a fair comparison by evaluating our approach alongside various OOD methods. This includes those that use additional OOD data during training (e.g., PEBAL) and those that do not (e.g., Energy score or Max Logit), providing a balanced and fair assessment. **General Applicability:** We would like to note that our method is not designed specifically for PEBAL, but is applicable to a wide array of differentiable OOD functions and training strategies (cf. Sec. 3.2). Empirically, we have tested on Max Logit, Energy, and PEBAL (as shown in Table 2), as well as Meta-OOD, Entropy, and MSP (as detailed in Table B1 in response to reviewer TGMY). These results collectively demonstrate the general applicability of our method.

## W4: Metric

We acknowledge the reviewer's suggestion.
It is worth noting that the metrics we have employed, namely AP, AUROC, and FPR95, are not only standard in the anomaly segmentation literature [42, 21, 14, 10] but also aligned with the main metrics of the Fishyscapes benchmarks [1]. Additionally, we have included experiments on the SegmentMeIfYouCan (SMIYC) benchmark, where we assessed the model's performance using the sIoU, PPV, and F1 metrics. Details of these evaluation results can be found in the General Response.

## W5 & Limitation: Time Overhead

In our General Response 2, we demonstrate the inference time of our method, concluding that it is only 1.25-2.25 times the pure inference time. This makes our approach much faster than some other OOD detection methods with post hoc operations. Concerning the GMM clustering overhead, it is worth emphasizing that the impact is minimal in our algorithm. This is primarily because the score to be fitted is one-dimensional, and we sample 1% of the total data in clustering. Due to the redundancy in pixel-wise output, this operation does not affect the fitting results. Hence, the overhead due to GMM clustering is not excessive for network inference.

## Q1: Is FS-Static-C just a data augmentation?

The FS Static-C dataset is utilized to test model performance (as detailed in our previous response), not as 'data augmentation,' which is a technique applied during training. Therefore, the term is not appropriate in this context.

## Q2: Training on original FS Static

As we explained earlier, we do not use any additional data for training, and FS Static -C serves solely as a test set. Therefore, the considered scenario does not apply.

## Q3: Alternative for GMM clustering

K-means clustering is not suitable for our case, as it assigns each point to the nearest centroid, placing the decision boundary at the midpoint between the two centroids regardless of cluster sizes.
Given that outliers are typically much fewer than inliers, this would result in an inaccurate boundary between inliers and outliers, thereby adversely affecting the performance of the self-training model. --- Rebuttal Comment 1.1: Comment: It appears that I initially misunderstood the utilization of the FS Static-C dataset. I commend the authors for providing clarification on their work in the rebuttal. Upon conducting a thorough re-evaluation of both the main manuscript and the authors' rebuttal, I have come to the realization that the authors have diligently tackled the majority of the concerns I had raised. Consequently, I see no justification for maintaining the initial score, and I am inclined to raise it. Nonetheless, I agree that the paper should go through another round of peer review with many changes. Hence, I will refrain from making a significant increase in my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer C29A Comment: Thank you for re-evaluating our paper and for increasing the score. We would like to clarify that the proposed changes, as detailed in our response to Reviewer TGMY, are specific and minimal. These adjustments are designed to address the comments raised without altering the core of our work, and the results are clearly presented in the rebuttal.
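To illustrate the Q3 point above, about why a two-component GMM suits the imbalanced one-dimensional OOD scores better than K-means, here is a toy pure-Python EM sketch (a hypothetical illustration, not the authors' implementation, which fits the mixture with scikit-learn). The fitted mixture weights recover the inlier/outlier imbalance, which shifts the posterior decision boundary toward the rarer component, whereas K-means on 1-D data always splits at the centroid midpoint:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gmm_1d(xs, mu_init, iters=100):
    """Plain EM for a two-component 1-D GMM; returns (weights, means, stds)."""
    pis, mus, sigmas = [0.5, 0.5], list(mu_init), [1.0, 1.0]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            ws = [pis[k] * normal_pdf(x, mus[k], sigmas[k]) for k in range(2)]
            s = sum(ws) or 1e-300
            resp.append([w / s for w in ws])
        # M-step: re-estimate mixture weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-300
            pis[k] = nk / len(xs)
            mus[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigmas[k] = max(math.sqrt(var), 1e-6)
    return pis, mus, sigmas

# Imbalanced synthetic "OOD scores": 95% inliers near 0, 5% outliers near 5.
random.seed(0)
scores = [random.gauss(0.0, 1.0) for _ in range(950)] + \
         [random.gauss(5.0, 1.0) for _ in range(50)]
# Initializing the two means at the extremes of the scores is one simple way
# to avoid both components collapsing onto the dominant inlier mode.
weights, means, stds = fit_gmm_1d(scores, mu_init=[min(scores), max(scores)])
```

On this synthetic mixture the recovered weights should be close to 0.95 and 0.05 with means near 0 and 5, so the equal-posterior boundary sits noticeably closer to the outlier mode than the K-means midpoint would.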
Summary: This paper focuses on the challenging task of open-set semantic segmentation (i.e., dense out-of-distribution (OOD) detection) with domain shift. It proposes a dual-level test-time adaptation framework to overcome domain shift and semantic shift simultaneously, which leverages low-level feature statistics to detect whether domain shift exists while identifying pixels with semantic shift by utilizing dense high-level feature maps. Specifically, it designs an anomaly-aware self-training component to address potential domain shift and improve its ability to detect novel classes through re-balanced uncertainty minimization. The proposed framework is demonstrated to obtain consistent performance improvements across various baseline models. Strengths: 1. This paper explores the realistic and challenging task of open-set semantic segmentation in a real-world scenario with domain shift, considering the impact of both domain shift and semantic shift (novel class from unknown domain) comprehensively. 2. The method in this paper seems reasonable and the experimental results prove the significant superiority of the proposed framework on several OOD segmentation benchmarks, regardless of with or without domain shifts. 3. The method and math presentation in this paper is generally clear. Weaknesses: 1. The visualization of the method in Fig.2 appears overly simplistic and fails to highlight the key components of the proposed framework effectively. 2. How to choose the parameters of the GMM in L220? Is the performance sensitive to their variations? The hyperparameter experiment of these parameters would be desirable. 3. Fig.1 intuitively demonstrates a "domain shift" (fog), but through the visual experiment in Fig.3, I can not understand what the specific "domain shift" that exists in the Road Anomaly dataset for the semantic segmentation is. 
This casts doubt on the practical application of this paper, despite the experimental evidence confirming the effectiveness of the proposed framework. 4. All the formulas in this paper have not been correctly punctuated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Do not apply. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and recognition of our work. To address your concerns, we have revised Figure 2 to better highlight the key components of our proposed framework and the overview of our methodology. The updated visualization can be found in the attached PDF. Additionally, we acknowledge the issue with punctuation in the formulas and will ensure that it is corrected in the revised version of the paper. We respond to your other concerns in the following. ## 1. Parameters of GMM: >How to choose the parameters of the GMM in L220? Is the performance sensitive to their variations? The hyperparameter experiment of these parameters would be desirable. We apologize for any confusion. The parameters of our two-component GMM, specifically $\pi_1, \pi_2, \mu_1,\mu_2,\sigma_1,\sigma_2$ in Line 220, are not determined by manual selection, but rather estimated by fitting the OOD scores to the GMM (its two-component structure is fixed), as described in line 214. This fitting process is conducted using the Expectation-Maximization algorithm, facilitated by the scikit-learn package [1]. The means for the GMM components are initialized in a manner that avoids trivial issues with multiple peaks in the inlier distribution (detailed in Appendix B), and the algorithm's convergence parameters are kept at their default settings within the library. [1] https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html ## 2. Specific "Domain Shift" in the Road Anomaly Dataset >Fig.1 intuitively demonstrates a "domain shift" (fog), but through the visual experiment in Fig.3, I can not understand what the specific "domain shift" that exists in the Road Anomaly dataset for the semantic segmentation is. This casts doubt on the practical application of this paper, despite the experimental evidence confirming the effectiveness of the proposed framework. 
The specific 'domain shift' in the Road Anomaly dataset, as compared to the Cityscapes dataset, encompasses adverse road conditions, diverse weather and lighting conditions, and various camera perspectives and conditions. Here, we detail each of these aspects:

- **Adverse Road Conditions**: The Cityscapes dataset mainly focuses on urban scenes with well-paved roads and uniform coloration, whereas the Road Anomaly dataset extends to more diverse locations like villages and mountains. These rural pathways exhibit varied textures and colors due to different materials, wear, soil types, and vegetation. Examples in Fig. 3 illustrate this contrast, with the visualization of Fishyscapes Lost & Found / Static serving as a reference for typical Cityscapes road conditions. Besides, we see that previous OOD detection methods often mistake these variations for anomalies, while our algorithm reduces such errors, showing better adaptation to the domain shift in road conditions. We refer to Appendix Fig. 2 for more visualization cases on the Road Anomaly dataset, where other examples with various road conditions can be seen.
- **Weather and Lighting Variations**: The Road Anomaly dataset encompasses diverse weather conditions not present in Cityscapes, including snowy, rainy, foggy weather, and nighttime scenes. These variations affect not only the road but also the surrounding areas and the sky, leading to more false positive errors in existing OOD detection models. Examples of these weather-related differences and their effects on OOD detection can be found in the attached PDF. Our method strives to account for these variations, enhancing the model's adaptability to changes in weather and lighting.
- **Various Camera Conditions**: Besides differences in content, the Cityscapes and Road Anomaly datasets also diverge in the conditions under which the images were captured.
In the Road Anomaly dataset, images may be captured from various angles and locations, such as alongside the road, which differs from the typical road-centered perspective in Cityscapes. This variance can disrupt previously learned biases in road surface predictions. Additionally, some Road Anomaly images demonstrate a focus effect where the background is intentionally blurred, an effect not commonly seen in Cityscapes. This can lead to false positive errors by initial OOD detection models, as shown in Figure 3, illustrating the sensitivity of models to different camera conditions. In summary, the Road Anomaly dataset's construction, with images gathered from various internet sources, reflects a real-world scenario involving complex domain-level distribution shifts. These shifts present an intriguing problem for existing OOD detection methods. Our research contributes to understanding and addressing these domain shifts within the dataset. Additional experiments on the SegmentMeIfYouCan benchmark, also characterized by domain-shift variations in road surfaces, lighting, and weather conditions, further substantiate these observations. For a detailed presentation of the results, please refer to General Response 1. --- Rebuttal Comment 1.1: Comment: Thanks for the illustration of the concept of "Domain Shift". I have read the responses, and all my concerns are addressed. I will adjust the rating accordingly.
Summary: The paper deals with two levels of domain shift in semantic segmentation, namely the domain shift on the semantic pixel level and the domain shift on the image level. The paper argues that current dense out-of-distribution (OOD) detection methods are particularly vulnerable in the presence of image-level domain shifts and therefore likely to predict wrong OOD scores on the pixel level. Based on this observation, it presents a two-stage test-time adaptation approach. In the first step, selective test-time batch normalization is performed, forcing more adaptation in the scenarios identified as novel, while in the second step, a self-training procedure using an anomaly-aware output map is proposed to enhance dense OOD detection. The approach is evaluated on a realistic domain-shift dataset, as well as two non-domain-shift datasets, where it shows good performance. While the rebuttal addresses several points of the reviews, the changes and additional experiments discussed are numerous. The paper should go through another round of peer review with many changes. Therefore, I will not increase my score. Strengths: - The analysis of how current OOD detection methods perform on datasets with distribution shifts, as shown in Figure 1 and Table 1, is very valuable and shows the importance of the problem studied throughout the paper. - The idea is easy to follow. - The combination of two steps, first to reduce the domain gap at the image level and second to improve the dense OOD detection performance of the models, is not well explored in the literature. - The results are convincing when compared to the prior work. Weaknesses: - The clarity of the method has room for improvement. For instance, the notation is one point, e.g., line 120: x^s, y^s not defined; Eq. 1: \mathcal{N} not defined. The method can be understood in general, but it could be further polished to make it easier to follow. There are also minor typos, e.g., line 38, line 245, line 194.
Also, the KL divergence is not clearly defined in Eq. 1. - In Fig. 1: What is shown in (a) and (b)? A detailed description would be helpful. - For comparison with their method, the authors add Tent to PEBAL. An explanation of how this is implemented would be helpful. - Table 1 compares 6 different OOD detection methods on the synthesised corrupted FS Static (FS Static -C) dataset. It shows a performance drop for all OOD detection methods on the FS Static -C dataset, but applying the proposed approach to PEBAL reduces this drop. It would be interesting to see if a similar effect can be achieved for the remaining 5 OOD detection methods. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - It is not straightforward to implement the paper. There is no discussion about releasing the code. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback and recognition of the novel aspects of our work. We are committed to improving the clarity in our final version, and we intend to release the code after the double-blind stage. Below, we respond to the specific concerns raised:

## W1: Clarity

- *Notation:* We thank the reviewer for the suggestion and will ensure the notation is more clearly explained in the revised manuscript. Specifically, in line 120, $x^s$ and $y^s$ denote the input image and corresponding segmentation label in the training data, respectively. In Eq. 1, $\mathcal{N}$ denotes the Normal distribution, and the KL divergence is defined in its standard form, which for continuous densities $p$ and $q$ reads $KL(P \| Q)=\int_{\mathcal{X}} p(x) \log \left(\frac{p(x)}{q(x)}\right) \mathrm{d}x$. This equation serves to calculate the KL divergence between two normal distributions with different parameters, as defined in the manuscript.
- *Typos:* We acknowledge the typos mentioned, specifically in lines 38, 245, and 194, and will ensure they are corrected in the revised manuscript.

## W2: Detailed Description of Figure 1 Components (a) and (b)

We provide a more detailed description and results analysis of Figure 1 in the following, which will also be incorporated in the revised manuscript's caption. - In Figure 1 (a), we show the visualization results of the OOD score maps produced by the previous SOTA method (PEBAL) for an original image and its domain-shifted version (corrupted with fog). The first column displays both images, while the second and third columns depict the corresponding OOD score maps produced by PEBAL and their histograms. By comparison, we observe a significant dampening in the OOD prediction score due to the domain shift, as the unknown object outlined in black lines becomes less distinguishable in the OOD score map of the second row, and the separation in the histogram between known and unknown classes diminishes.
- In Figure 1 (b), we show the qualitative results of different methods tested on the original FS Static dataset and its domain-shifted version. We quantify the drop in PEBAL's performance with the added domain shift and the substantial mitigation of this deterioration when combined with our method. We also examine test-time adaptation methods like TBN and Tent, which gain improvement on high-domain-shift datasets at the cost of performance on the original dataset. Unlike these, our method enhances performance across both non-domain-shift and domain-shift scenarios, reflecting greater robustness to real-world settings where test data can come from seen or unseen domains.

## W3: Implementation details of Tent on PEBAL

We use the pretrained model and inference code of PEBAL provided by their official GitHub repository and optimize the parameters of this pretrained model during test time by implementing Tent on top of it. As detailed in Appendix B of our manuscript, we closely followed the original Tent paper and its official code to implement this combination. This involves using transductive batch normalization, calculating the entropy loss on the inlier classifier, and updating the affine parameters of all Batch Normalization layers. We also adopt the episodic training scheme, a batch size of 1, and the Adam optimizer, a setting consistent with both the Tent paper's guidance for their segmentation experiments and our own method's configuration. To ensure a fair comparison, we update the model only once for each test image and tune the learning rate on the FS Static-C dataset. These steps allow us to align our experimental setup with that of the Tent method and facilitate a meaningful comparison.

## W4: Our Method Combined with Various OOD Detection Methods on FS Static-C

Thank you for the suggestion. As requested, we extended our evaluation to include not only PEBAL but also the remaining five OOD detection methods.
The results in Table B1 below demonstrate that our method consistently enhances the robustness of these OOD detection techniques in scenarios with potential domain shifts. Specifically, we achieved significant performance gains on the FS Static-C dataset for all OOD detection methods, with an average increase in AUC of around 20% (cf. row #2), in AP of 10% (cf. row #4), and a decrease in FPR95 of nearly 80% (cf. last row). Some results on the FS Static-C dataset are even better than the original performance of the corresponding methods on the non-domain-shift FS Static dataset, such as MSP, Entropy, and Meta-OOD, as shown in the table. By combining our method with various OOD detection methods, we demonstrate its efficacy and general applicability across different scenarios.

Table B1: We display additional results of our method combined with various previously established OOD detection methods on both the FS Static and FS Static-C datasets.

| Metric | Dataset | MSP | + Ours | Entropy | + Ours | Max logit | + Ours | Energy | + Ours | Meta-OOD | + Ours |
|--------|-------------|-------|-----------|---------|-----------|-----------|-----------|--------|-----------|----------|-----------|
| AUC $\uparrow$ | FS-Static | 92.36 | **93.91** | 93.14 | **95.18** | 95.66 | 95.48 | 95.90 | **96.00** | 97.56 | **98.19** |
| | FS-Static-C | 70.85 | **92.97** | 71.23 | **94.33** | 74.13 | **94.80** | 74.02 | **95.41** | 78.34 | **98.06** |
| AP $\uparrow$ | FS-Static | 19.09 | **26.57** | 26.77 | **39.57** | 38.64 | **41.23** | 41.68 | **41.84** | 72.91 | **83.11** |
| | FS-Static-C | 10.52 | **20.81** | 14.32 | **30.78** | 23.60 | **31.13** | 22.36 | **32.13** | 52.31 | **75.75** |
| FPR95 $\downarrow$ | FS-Static | 23.99 | **20.80** | 23.31 | **18.98** | 18.26 | 20.89 | 17.78 | **17.63** | 13.57 | **11.63** |
| | FS-Static-C | 100.0 | **22.58** | 100.00 | **20.21** | 89.94 | **23.59** | 89.94 | **18.63** | 100.0 | **11.17** |

---

Rebuttal 2: Comment: While the rebuttal addresses
several points raised in the reviews, the changes and additional experiments discussed are extensive. With this many changes, the paper should go through another round of peer review. Therefore, I will not increase my score.

---

Rebuttal Comment 2.1: Title: Response to Reviewer TGMY

Comment: Dear Reviewer TGMY, Thank you for your thoughtful feedback. Regarding the extent of the changes, we'd like to emphasize that our rebuttals are targeted responses to the comments raised by reviewers. Rather than altering the main content of our paper, they are intended to complement and reinforce our existing arguments. Therefore, the additional materials will primarily be placed in the appendix and should not affect the paper's main structure. Here is a summary of our additional results and changes:

**Summary of the Additional Results:**

- **SMIYC Benchmark (General Response 1):** These results extend our testing on domain-shift datasets, aligning with existing findings on the FS Static-C and Road Anomaly datasets.
- **Lost and Found Dataset / Online FS Test Set (Response to Reviewer Nknk):** These complementary results align with existing findings on the Fishyscapes Lost & Found and Fishyscapes Static offline datasets.
- **Combination with Other OOD Detection Methods on FS Static-C (Response to Reviewer TGMY):** These results demonstrate the general applicability of our method, which is consistent with our existing findings shown in Table 2.
- **Overhead Analysis (General Response 2):** The additional inference time has been discussed as a potential limitation in our conclusion. While efficiency is not our paper's focus, we have included these results to address concerns.
- **Combination with Architectures without BN (Response to Reviewer XNCt):** Our paper primarily focuses on architectures with BN. The experiments with Segmenter (using Layer Norm) demonstrate extensibility to other architectures but are not a central focus of our paper.
**Summary of Other Changes:**

- **The Detailed Figure 2:** This enhanced version is already provided in the attached PDF. It can directly replace the original Figure 2, with minimal impact on other parts.
- **Notation Clarity, Minor Typos, Detailed Captions, Formula Punctuation (Response to Reviewers TGMY, 7M1E):** These are minor points and will not change the overall content.

As outlined above, the additional results and changes are designed to support and reinforce our paper without altering its main content. We appreciate your thoughtful insights and remain open to any further suggestions. Sincerely, Authors
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and constructive comments. In the following, we address some shared concerns in this general response and answer each individual question by replying to each reviewer. We also include an additional PDF containing a revised Figure 2, some examples showing the specific "domain shift" of the Road Anomaly dataset, and visualization results on the SMIYC validation set.

## 1. Results on the SMIYC benchmark

We thank the reviewers for the suggestion to evaluate our method on the SegmentMeIfYouCan (SMIYC) [1] benchmark, which in particular includes significant domain shifts such as variations in illumination and weather. We submitted our outputs to the benchmark test set and present the results in Table G1. Our experiments on the RoadAnomaly21 and RoadObstacle21 datasets, both part of the SMIYC benchmark, demonstrate significant improvements over the previous SOTA method PEBAL [2]. Notably, on the RoadObstacle21 dataset, PEBAL's performance is hampered by a lack of robustness, whereas our method increases the AUPRC score from 5.0% to 76.5%. This validates our method's adaptability to substantial domain shifts. We also present the visualization results on the validation sets in the attached PDF.

Table G1: Results on the SMIYC official test benchmark. The results for our model were obtained by submitting the model outputs to the benchmark organizer, as required. The results for PEBAL were taken from the benchmark's official website.
| RoadAnomaly21 | | | | | |
|--------------|:--------------:|:------:|:--------:|:----:|:--------:|
| Methods | AUPRC $\uparrow$ | FPR95 $\downarrow$ | sIoU gt $\uparrow$ | PPV $\uparrow$ | mean F1 $\uparrow$ |
| PEBAL | 49.1 | 40.8 | 38.9 | 27.2 | 14.5 |
| PEBAL + ATTA | **67.0** | **31.6** | **44.6** | **29.6** | **20.6** |

| RoadObstacle21 | | | | | |
|--------------|:--------------:|:------:|:--------:|:----:|:--------:|
| Methods | AUPRC $\uparrow$ | FPR95 $\downarrow$ | sIoU gt $\uparrow$ | PPV $\uparrow$ | mean F1 $\uparrow$ |
| PEBAL | 5.0 | 12.7 | 29.9 | 7.6 | 5.5 |
| PEBAL + ATTA | **76.5** | **2.8** | **43.9** | **37.7** | **36.6** |

[1] Robin Chan, et al. SegmentMeIfYouCan: A benchmark for anomaly segmentation. NeurIPS, 2021.
[2] Yu Tian, et al. Pixel-wise energy-biased abstention learning for anomaly segmentation on complex urban driving scenes. ECCV, 2022.

## 2. Inference Time

We acknowledge the reviewers' concern about computational efficiency.

- In this paper, we address a general dense out-of-distribution detection problem, where our method may be applied to various applications beyond self-driving, and thus real-time performance is not the primary focus.
- In response to the reviewers' request, we have evaluated the average inference time for each image (see Table G2), revealing that our method is only 2.25 times slower than direct inference, and faster than another test-time adaptation method, Tent [1], as well as some OOD detection methods with post-hoc operations: ODIN [2], Mahalanobis Distance [3], and Synboost [4].
- This efficiency is attributed to our design, which updates only once per image and confines the learnable parameters to the classifier block (cf. lines 238, 278 of the manuscript). The latter design enables us to perform the backward and subsequent forward pass only on the classifier block, and is the main reason we achieve much faster inference than Tent.
- Furthermore, in scenarios where data from the same domain are known, we can reduce the computation by performing domain-shift detection only once and maintaining the variable for subsequent images. To illustrate this, we show the inference time without domain-shift detection, which further reduces it to 1.25 times the direct inference speed.
- Practically, our episodic training model allows for parallel inference across multiple processors. With ongoing hardware advancements, we anticipate a further reduction in the time gap.

Table G2: Comparison of inference time (seconds per image). We calculate the complete time from input to the final prediction and/or OOD score (see [5] for details). Experiments are conducted on one NVIDIA TITAN Xp device, and results are averaged over all images in the FS Lost & Found validation set, with image size 1024 x 2048.

| Methods | Direct Inference | ATTA (Ours) | ATTA (Ours) w/o SBN | Tent | ODIN | SynBoost | Mahalanobis |
|:---------:|:----------------:|:-----------:|:-------------------:|:----:|:----:|:--------:|:-----------:|
| Time (s) | 1.2 | 2.7 | 1.5 | 5.1 | 9.2 | 3.0 | 224.2 |

[1] Dequan Wang, et al. Tent: Fully test-time adaptation by entropy minimization. ICLR, 2021.
[2] Shiyu Liang, et al. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. ICLR, 2018.
[3] Kimin Lee, et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. NeurIPS, 2018.
[4] Giancarlo Di Biase, et al. Pixel-Wise Anomaly Detection in Complex Driving Scenes. CVPR, 2021.
[5] https://deci.ai/blog/measure-inference-time-deep-neural-networks/

Pdf: /pdf/bdb15a63917b5e585e6af244b4918a920be7c814.pdf
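The per-image timing protocol behind Table G2 (complete wall-clock time from input to final prediction, averaged over a validation set, as recommended in [5]) can be sketched as below. `run_model` and the warm-up count are illustrative placeholders of ours, not the paper's actual code; on a GPU one would additionally synchronize the device before reading the clock.

```python
import time

def average_inference_time(run_model, images, warmup=3):
    """Average per-image wall-clock time, in the style of Table G2.

    `run_model` is any callable mapping an image to a prediction (and/or an
    OOD score). A few warm-up calls are made first so that one-time costs
    (allocator setup, kernel compilation, caches) do not inflate the average.
    """
    for img in images[:warmup]:          # warm-up passes, not timed
        run_model(img)
    start = time.perf_counter()
    for img in images:                   # timed passes over the full set
        run_model(img)
    return (time.perf_counter() - start) / len(images)
```

`time.perf_counter` is the appropriate clock here because it is monotonic and high-resolution, unlike `time.time`.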
NeurIPS_2023_submissions_huggingface
2023
Summary: The proposed method considers OOD samples under both domain shift and semantic shift. It addresses the problem that current OOD approaches often ignore domain shifts. The authors introduce an anomaly-aware test-time adaptation method that jointly tackles domain and semantic shifts. In experiments on different benchmarks, the proposed method demonstrates significant performance gains on various OOD segmentation benchmarks, and notably robust performance on benchmarks with notable domain shifts. Strengths: The paper has well-organized writing and clear motivation for each part of the proposed method. The proposed method is relatively novel and presented clearly. It achieves SOTA results with large performance gains compared with other SOTA methods. The proposed method is the first to focus on handling OOD detection under domain shift, which has been overlooked by previous OOD methods. Weaknesses: From Figure 2, readers cannot figure out any details regarding each of the proposed components and the overall framework. I would suggest the authors include more details in the figure to give more basic information for each of their contributions. Although the authors evaluate their approach on the Road Anomaly dataset to assess effectiveness under domain shift, the domain shift is even more significant in the SegmentMeIfYouCan benchmark. Hence, I would suggest the authors show more results on the SegmentMeIfYouCan benchmark and see if the proposed ATTA can improve the results. From what I observed, previous SOTA methods such as PEBAL and Meta-OOD perform relatively worse on domain-shifted benchmarks like SegmentMeIfYouCan. It would be better if the authors could compare their approach on the online Fishyscapes testing set. The validation set only contains a few images, which may not be enough to effectively quantify the performance of the method.
Computational efficiency is important to perform real-time detection for self-driving systems; could the authors present the training and inference time of the proposed approach and compare it with other methods? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The computational efficiency of the proposed method is unknown. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: According to the paper, the model's improvement is less noticeable with weaker backbones. The proposed method adds additional inference time. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our contribution and for your thoughtful and detailed feedback. We have presented our results on SMIYC and analyzed the inference time of our method in the general response. Please find our responses to your other concerns below:

## W1: Detailed Figure 2 - Model Overview

We thank the reviewer for the suggestion and have provided a revised Figure 2 in the attached PDF, containing details of our model design. This updated figure provides a more detailed illustration of each of the proposed components and the overall framework.

## W3: Results for online FS Test set

We thank the reviewer for the suggestion and have submitted our results to the online Fishyscapes testing server during the rebuttal. Unfortunately, due to an unexpected shutdown of the testing server, we are unable to provide the results at this moment. We promise to provide the results when they are available and will include them in our revised paper. To evaluate our method on a larger benchmark, we have tested our model on the Lost And Found dataset [1], which contains 1203 images. As presented in Table A1, our method outperforms the previous OOD detection methods, Max Logit and PEBAL, across all metrics, validating our approach's efficacy. We note that since this dataset contains minimal domain shift, our improvement is relatively small compared to the performance gains observed on other datasets exhibiting more pronounced domain shifts.

Table A1: Comparison of methods on the Lost And Found dataset. Results were obtained by running the publicly available pre-trained models on our device.

| Lost And Found | | | |
|----------------|:---------:|:---------:|:---------:|
| Methods | AUROC $\uparrow$ | AP $\uparrow$ | FPR95 $\downarrow$ |
| Max Logit | 92.73 | 53.22 | 52.51 |
| Max Logit + ATTA | **93.66** | **57.93** | **47.27** |
| PEBAL | 96.88 | 71.21 | 14.63 |
| PEBAL + ATTA | **96.95** | **72.39** | **14.55** |

[1] Peter Pinggera et al.
Lost and found: detecting small road hazards for self-driving vehicles. IROS, 2016.

## W4 & Question: Computational Efficiency

We appreciate the reviewer's concern about computational efficiency.

- **Training Time:** Our method is designed primarily for test time and can leverage a pretrained model, eliminating additional training. If a pretrained model is not available, we can follow the standard closed-world segmentation training procedure, utilizing the training set. The training time would then depend on the specific segmentation model used, but our method does not add to this time.
- **Inference Time:** In General Response 2, we demonstrate the inference time of our method and compare it with other methods. We conclude that our method takes only 1.25-2.25 times the pure inference time and is faster than Tent and some OOD detection methods with post-hoc operations, such as ODIN, Synboost, and Mahalanobis Distance.

In case the reviewer specifically means the training and inference time within our self-training procedure, we provide a detailed breakdown in Table A2. We note that the inference time to get the final OOD score is only 0.1s, since we only need to re-run the classification block.

Table A2: Detailed time (seconds per image) for our self-training procedure. The experimental setting is kept the same as in Table G2. The initial forward includes the forward pass of both the feature extractor and classifier.

| Components | Initial Forward | Loss and Backward | Classifier Forward |
|:----------:|:---------------:|:-----------------:|:------------------:|
| Time (s) | 1.2 | 0.2 | 0.1 |

---

Rebuttal Comment 1.1: Title: Online Fishyscapes Testing Set Results

Comment: Dear Reviewer Nknk, We have now obtained our results for the online Fishyscapes testing set. As presented in Table A3, our method outperforms the previous state-of-the-art, PEBAL. We appreciate your insights and remain open to further suggestions.
Table A3: Results on the Fishyscapes online test benchmark. The results for our model were obtained by submitting it to the benchmark organizer. The results for PEBAL were taken from their published paper.

| | Online FS Lost & Found | |
|--------------|:---------------:|:----:|
| | AP $\uparrow$ | FPR $\downarrow$ |
| PEBAL | 44.17 | 7.58 |
| PEBAL + ATTA (Ours) | 55.94 | 4.66 |

| | Online FS Static | |
|--------------|:---------------:|:----:|
| | AP $\uparrow$ | FPR $\downarrow$ |
| PEBAL | 92.38 | 1.73 |
| PEBAL + ATTA (Ours) | 94.68 | 0.68 |
Differentiable Neuro-Symbolic Reasoning on Large-Scale Knowledge Graphs
Accept (poster)
Summary: This paper integrates rule-based reasoning and knowledge graph (KG) embedding to enable effective and efficient knowledge graph reasoning. The key idea is to use probabilistic soft logic (PSL) to assess the agreement between the inferred triples and weighted rules, based on the embedding representations of entities and relations. The proposed framework DiffLogic uses several mechanisms to accelerate the optimization. First, it selects essential triples from the inferred triples to perform the assessment. Second, it utilizes the sparsity of violated triples to efficiently estimate the gradient of rule weights. Extensive experiments have been conducted, and the results demonstrate that DiffLogic outperforms all state-of-the-art baselines. Strengths: - Originality: It is novel to use probabilistic soft logic (PSL) to perform neuro-symbolic reasoning, which makes the proposed framework DiffLogic differentiable. Two tailored mechanisms have been designed to accelerate the optimization. First, it selects essential triples from the inferred triples to perform the assessment. Second, it utilizes the sparsity of violated triples to efficiently estimate the gradient of rule weights. - Quality: The proposed framework is technically sound. Extensive experiments have been conducted, and the results demonstrate that DiffLogic outperforms state-of-the-art baselines. Details of the experiments have been provided. - Clarity: The motivation and challenges are well described. The main idea of the proposed framework is well explained. The Experiments Section is well structured. - Significance: Neuro-symbolic reasoning is promising since it can potentially combine the advantages of KG embedding and rule-based reasoning. The experimental results show that DiffLogic is effective and efficient. Weaknesses: 1. Some notations are confusing. First, examples should be given to explain I^-_q and I^+_q. Why is distinguishing them important to DiffLogic?
Second, we already have h and t to represent head and tail. Why do we still need A and B? Third, it seems like Z() is unnecessary since it is only used in Eq. (4). Fourth, \theta denotes the embedding parameters. Then, what are the meanings of x(\theta) and y(\theta)? Should we use x|\theta and y|\theta? 2. The writing can be improved. First, examples should be given to explain assignments x and y. Are they continuous? Second, it seems like ground formulas are triples, but the name “formula” is confusing. Third, the connection between contribution (1) and contributions (2) (3) (4) is unclear. What is ELBO, and why is it related to DiffLogic? What are the relations between DiffLogic and the “efficient grounding technique”? 3. The paper emphasizes that DiffLogic is efficient. The running time results in Table 9 should be moved from Supplementary Material to Main Content. The time complexity of DiffLogic should be given. 4. All equations should be indexed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why is distinguishing I^-_q and I^+_q important to DiffLogic? 2. What are the meanings of x(\theta) and y(\theta)? Should we use x|\theta and y|\theta? 3. What is ELBO, and why is it related to DiffLogic? 4. What is the corresponding mathematical expression for using important ground formulas to perform the assessment? 5. What is the time complexity of DiffLogic in Big O? == I acknowledge I have read the rebuttal and decide to keep my score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have already mentioned the limitation, i.e., “The performance of DiffLogic heavily depends on the quality of the rules”.
It is the limitation of all rule-based reasoning studies and neuro-symbolic reasoning studies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. Why is distinguishing $I^-_q$ and $I^+_q$ important to DiffLogic?

A. Thank you for your question. The importance of differentiating between the notations $I^-_q$ and $I^+_q$ is twofold: knowledge representation and logical reasoning. 1. Knowledge Representation: As introduced in subsection 2.2 of our paper, a first-order logic rule is represented as a disjunction of atoms and their negations. $I^-_q$ and $I^+_q$ are index sets containing the indices of atoms that are negated or not, respectively. A rule in this format can also be reorganized as an implication from the premise (negated atoms) to the conclusion (non-negated atoms). Distinguishing between these two notations is crucial as they designate which part forms the premise and which part forms the conclusion. 2. Logical Reasoning: When employing a logic rule for reasoning, it is essential to identify facts that match the premise. Once facts from the knowledge graph match the premise, the facts in the conclusion part are then inferred. DiffLogic jointly uses a set of weighted rules to perform reasoning on knowledge graphs. During rule grounding, we find paths that match the negated atoms of a rule. During probabilistic logic reasoning, the assignments $x$ and $y$ are encouraged to satisfy more rules so more new facts are inferred from the knowledge graph.

Q2. What are the meanings of $x(\theta)$ and $y(\theta)$? Should we use $x_\theta$ and $y_\theta$?

A. Thanks for your question. The notations $x(\theta)$ and $y(\theta)$ represent the assignments $x$ and $y$, both parameterized by an embedding $\theta$. In the context of Probabilistic Soft Logic (PSL), the inference task is to infer the assignment of unobserved facts, or $y$, based on the assignments of observed facts, $x$.
Under the MLN/PSL framework, the full representation of all assignments is memory-intensive: it requires $O(|\mathcal{E}|^2|\mathcal{R}|)$ parameters to represent all assignments for a Knowledge Graph (KG) with $|\mathcal{E}|$ entities and $|\mathcal{R}|$ relations. We address this by parameterizing these assignments through the output scores of a KG embedding model. As such, the assignments become $x(\theta)$ and $y(\theta)$. During the inference step, we update the embedding $\theta$, so it is necessary to explicitly represent all assignments as $x(\theta)$ and $y(\theta)$. Conversely, during the weight updating step, the embeddings $\theta$ are fixed, so we can omit it for concise writing. I hope this clarifies your query about our notation.

Q3. What is ELBO, and why is it related to DiffLogic?

A. In variational inference, the Evidence Lower Bound (ELBO) is a crucial concept. It is a function of the parameters of the variational distribution. The idea behind variational inference is to approximate the true posterior distribution (which is often intractable in complex models) with a simpler distribution that we can work with more easily. The process aims to minimize the Kullback-Leibler (KL) divergence between the approximated and the true posterior, or alternatively, to maximize the ELBO. Existing Markov Logic Network (MLN)-based neuro-symbolic methods, such as pLogicNet and ExpressGNN, employ either a KG embedding model or a Graph Neural Network (GNN) to approximate the optimal distribution of assignments $x$ and $y$. The optimization of this approximation occurs through updating embeddings; the embedding updating step is thus actually variational inference. Both pLogicNet and ExpressGNN seek to optimize the ELBO as their objective to avoid intractable computation, but this also leads to indirect optimization of the actual objective of the MLN.
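For reference, the ELBO discussed above can be written explicitly in standard variational-inference notation (a generic statement, not copied from the paper), with $x$ the observed assignments, $y$ the unobserved assignments, and $q(y)$ the variational distribution induced by the neural model:

$$\log p(x) = \underbrace{\mathbb{E}_{q(y)}\big[\log p(x, y) - \log q(y)\big]}_{\text{ELBO}(q)} + \mathrm{KL}\big(q(y) \,\|\, p(y \mid x)\big).$$

Since the KL term is non-negative, the ELBO lower-bounds $\log p(x)$, and maximizing it over $q$ is equivalent to minimizing the KL divergence to the true posterior; this is the indirect objective that pLogicNet and ExpressGNN optimize.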
Contrastingly, DiffLogic allows us to directly maximize the posterior (in equation (4)) due to the continuous characteristic of our framework, leading to a smoother integration of the MLN and the KG embedding model, and better optimization efficiency.

Q4. What is the corresponding mathematical expression for using important ground formulas to perform the assessment?

A. Thank you for your question. In this work, we use rule-guided iterative grounding to identify important ground formulas, to facilitate efficient optimization rather than assessment. Using the important ground formulas for optimization is shown in equation (6): the first term computes the weighted sum of potentials over the selected ground formulas.

Q5. What is the time complexity of DiffLogic in Big O?

A. Thanks for your question. The run-time of DiffLogic consists of two parts, inference (embedding updating) and rule weight updating. The Big O time complexity for inference is $O(n \cdot e / bs)$, where $n$ is the number of triples in the training set, $e$ is the overall number of epochs of embedding learning, and $bs$ is the number of training examples used in each batch. The Big O time complexity for rule weight updating is $O(b_w n_m)$, where $b_w$ denotes the batch size we use to estimate the gradient of the rule weights (see the equation between line 209 and line 210) in a minibatch, and $n_m$ is the sample size we use in Monte Carlo integration. In practice, we exploit the sparsity of violated rules, so computing the terms $\Psi_{q, MB(i)}$ can be reduced to constant $O(1)$ time complexity. The expectation term $\mathbb{E}_{y_i \mid MB}\left[\Psi_{q, MB(i)}\right]$ can be estimated using Monte Carlo integration, so its complexity is $O(n_m)$. Therefore, the overall complexity of the rule weight updating step is $O(b_w(n_m + c)) = O(b_w n_m)$, where $c$ is the constant above.
We also observe empirically that the rule weight update is quite efficient and most of the run-time comes from the inference step. Since the inference step has the same big-O run-time complexity as the training of a purely data-driven KG embedding model, DiffLogic can be trained efficiently and can be further accelerated by using a larger batch size on a larger GPU.

---

Rebuttal Comment 1.1: Title: Reply to rebuttal

Comment: Thanks for the authors' effort in providing the rebuttal. This helped clarify my concerns. I am happy to keep my current rating.
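The PSL machinery discussed in Q1 can be illustrated with a minimal sketch of our own (a simplification, not the paper's code). A rule is a disjunction over atoms indexed by $I^-_q$ (negated, forming the premise) and $I^+_q$ (non-negated, forming the conclusion); under the Łukasiewicz relaxation used by PSL, the clause's soft truth is $\min(1, \sum_i \text{literal}_i)$ and its "distance to satisfaction" is the remaining slack, which is what the weighted potentials penalize.

```python
def distance_to_satisfaction(values, neg_idx, pos_idx):
    """Lukasiewicz distance to satisfaction of one ground clause.

    `values[i]` is the soft truth value in [0, 1] of atom i (here a plain
    list; in DiffLogic these would come from the KG embedding scores).
    Atoms in `neg_idx` (the index set I^-_q) appear negated, i.e. they form
    the premise of the implication, while atoms in `pos_idx` (I^+_q) form
    the conclusion. A value of 0.0 means the ground rule is satisfied.
    """
    literal_sum = sum(1.0 - values[j] for j in neg_idx) \
                + sum(values[i] for i in pos_idx)
    return max(0.0, 1.0 - literal_sum)
```

For example, for a hypothetical rule livesIn(A, C) ∧ bornIn(A, C) → citizenOf(A, C) grounded with soft values 0.9, 0.8, and 0.3, the premise is strongly satisfied but the conclusion is weak, giving a distance of max(0, 1 - (0.1 + 0.2 + 0.3)) = 0.4; raising the conclusion's value toward 1 drives the distance to zero, which is how satisfying more rules pulls the assignments during reasoning.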
Summary: The paper presents a novel approach to the knowledge base completion task by combining embedding-based and rule-based approaches in a neuro-symbolic framework. The rule-based component uses probabilistic soft logic to encode rule truth values as continuous values. The overall framework follows an EM approach, in which rule weight updating and embedding updating alternate. The authors then show better performance compared to baseline models on the task of knowledge base completion. Additionally, the paper conducts analyses of the efficiency of learning rules and their weights, as well as the effectiveness of injecting new/unseen rules into the framework. Strengths: 1. The proposed approach effectively combines the strengths of both embedding-based and rule-based approaches, resulting in a comprehensive framework. The method is well-motivated, and the empirical results of the proposed approach are strong compared to previous baselines. 2. The rule injection analysis demonstrates the efficacy of injecting additional new rules into the framework. This experiment highlights the flexibility and adaptability of the proposed approach. 3. The additional rule-violation analysis shows strong results compared to the base knowledge base completion model RotatE. Weaknesses: 1. There should be an analysis of the weight parameter between the rule module and the embedding model at inference time. Understanding this behavior would enhance understanding of the model's inference process: whether it relies on the rule system or the embedding system. It would also be interesting to know whether changing this parameter results in a big performance difference. 2. Similarly, it would be great to see the rule weight change before and after learning. Evaluating the changes in rule weights during the learning process would provide valuable information about the evolution of the rules and their influence on the final model performance.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The initialization method for the knowledge graph (KG) embeddings is random, right? Is there a performance change if you try to initialize the embeddings using pre-trained KB models? 2. In the middle of Figure 3, the training MRR and test MRR are almost the same. Typically, there is a difference between the two metrics. Is this happening with all the models? Are the train and eval data so similar to each other? Did you try to overfit the training data and see how the test results changed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, they discussed the limitation of the method relying on high-quality rules. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your questions. We will first answer your questions, and then address the weaknesses.

Q1. The initialization method for the knowledge graph (KG) embeddings is random, right? Is there a performance change if you try to initialize the embeddings using pre-trained KB models?

By default, the KG embeddings are randomly initialized. Changing the initialization method does not change the final performance, as the optimization objective is unchanged. However, by initializing the embeddings with pre-trained embeddings (e.g., a RotatE model pre-trained using margin loss and negative sampling), it takes fewer epochs to converge.

Q2. In the middle of Figure 3, the training MRR and test MRR are almost the same. Typically, there is a difference between the two metrics. Is this happening with all the models? Are the train and eval data so similar to each other? Did you try to overfit the training data and see how the test results changed?

Thank you for your question. Allow us to clarify the confusion. In fact, the two almost-overlapping MRR curves in Figure 3 are both training MRRs, where one curve is for DiffLogic (red, solid line) and the other is for RotatE (green, dotted line), so the overlap of the two curves does not mean the training and testing accuracy coincide. The similarity between these MRR evolution curves stems from three main factors: 1) We adopt RotatE as the KG embedding model of DiffLogic. 2) During inference (or embedding updating), the learning objectives of DiffLogic are a composite of the objective of RotatE and the rule-based objective. 3) Both curves represent training MRRs, indicating how well the RotatE model fits the training data.

Response to the two weaknesses. Thank you for your kind advice. The rule weights in DiffLogic are dynamically updated during the rule weight learning step, and the updated rule weights are then used to enhance the embedding learning step.
We will include more analysis of the rule module in our manuscript. Thanks for your suggestions!

---

Rebuttal Comment 1.1: Title: Thanks for answering my questions!

Comment: I read the response and will keep my score.
Summary: This paper introduces a differentiable logic approach, DiffLogic, based on the probabilistic soft logic (PSL) representation. Efficient training of DiffLogic is enabled through the introduction of a grounding technique that iteratively identifies important ground formulas required for inference, and additionally a fast estimation technique for computing the gradient of rule weights. The authors demonstrate this on a subset of benchmarking tasks commonly used in the field. Strengths: 1. The paper is well written and clear in its goals. 2. The originality of the work resides in the specific embedding representation, which is interesting and straightforward; relevant derivations are presented clearly in the appendix. Weaknesses: 1. The significance of the work is difficult to assess, as the authors do not compare to many other differentiable neural logic approaches that have appeared over the past 5 years, either quantitatively or conceptually. 2. While the title claims to apply to Large-Scale Knowledge Graphs, the authors do not evaluate on one of the most common such datasets, FB15k-237, one of the larger common evaluation datasets used in this field. It is worth noting that the CoDEx-L dataset is somewhat comparable, but not as common in the relevant literature. 3. When comparing evaluations on CoDEx-L, the authors do not include TuckER performance; TuckER was one of the baselines in the original CoDEx paper and appears to outperform the method presented in this paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Can you explain this method and its performance in the context of recent similar methods? This method seems somewhat complicated to reproduce. Is there a software implementation available? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors could better address the distinction between logical and purely embedding representations, incorporate observations about embedding methods that outperform the differentiable logic approach, and note the significant advantages of a differentiable logic method relative to a pure GNN. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your question. We will first answer the question raised by the reviewer, which concerns the first weakness; then we will respond to weakness 2 and weakness 3. Q1. Can you explain this method and its performance in the context of recent similar methods? This method seems somewhat complicated to reproduce. Is there a software implementation available? A. Our proposed method, DiffLogic, is a neuro-symbolic approach that combines Markov Logic Networks (MLN) with neural methods. It takes advantage of MLN’s ability to inject knowledge encoded in rules and leverages neural methods’ ability to learn from data. The distinguishing factor of DiffLogic lies in its continuous nature, which brings advantages in terms of optimization efficiency and performance. Recent similar neuro-symbolic methods such as pLogicNet and ExpressGNN also attempt to combine the advantages of MLN and neural methods. Here's how DiffLogic compares to these: 1) Optimization: Both pLogicNet and ExpressGNN use a neural model to approximate the discrete assignments $x$ and $y$ in an MLN. Finding the optimal approximation is computationally costly because it involves variational inference over a large space. These models avoid this computation by optimizing an Evidence Lower Bound (ELBO) instead. In contrast, DiffLogic, due to its continuous nature, can directly optimize the MLN objective, thereby offering better computational efficiency and better consistency with rules. 2) Experimental setting: ExpressGNN requires querying the test dataset during inference, which limits its ability to generalize. In contrast, DiffLogic only requires the training and validation sets during training. Therefore, once DiffLogic is trained, it can generalize to unseen data without any further training, providing it with superior generalization ability. Regarding the implementation, we have implemented DiffLogic in Python. 
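To make the continuous-optimization point above concrete, here is a minimal sketch of PSL's soft-logic (Łukasiewicz) semantics, under which rule violations become hinge penalties that can be minimized directly rather than via an ELBO. Function names and scores are illustrative, not taken from DiffLogic's actual code.

```python
# Sketch of PSL's Lukasiewicz relaxation: soft truth values in [0, 1]
# make the MLN objective continuous and directly optimizable.

def soft_and(truths):
    # Lukasiewicz t-norm: soft conjunction of truth values in [0, 1]
    return max(0.0, sum(truths) - (len(truths) - 1))

def distance_to_satisfaction(body_truths, head_truth):
    # For a rule body => head: 0 when satisfied, grows linearly when violated.
    return max(0.0, soft_and(body_truths) - head_truth)

# Example: Father(A,B) AND Wife(C,A) => Mother(C,B), with soft scores
body = [0.9, 0.8]   # truth scores of the two body atoms (e.g., from a KGE model)
head = 0.4          # truth score of the head atom
loss = distance_to_satisfaction(body, head)  # differentiable almost everywhere
```

Because the penalty is piecewise linear in the truth scores, gradients flow straight through to the underlying embedding model, which is what the rebuttal means by direct optimization of the MLN objective.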
We intend to open-source the official code upon acceptance of this paper. Thank you for showing interest! Response to weakness 2. Thank you for your kind suggestion. We didn't include FB15k-237 because the datasets used in our experiments are already large and challenging. Specifically, the number of training triples in CoDEx-L and YAGO3-10 is 551K and 1.08M, respectively, larger than FB15k-237, which contains 272K training triples. We will add the results for FB15k-237 in our manuscript; thanks for your advice. Response to weakness 3. In our original manuscript, we used uniform negative sampling in our implementation of all KG embedding models for a fair comparison. If we use adversarial negative sampling instead, our model achieves higher performance on CoDEx-L (MRR=0.337 and Hit@10=0.46) than TuckER (MRR=0.309, Hit@10=0.430) and other baselines. We will add the comparison with TuckER in our revision. Thanks for the suggestion. We hope this addresses your concerns about our experimental results. --- Rebuttal Comment 1.1: Comment: Thank you for your explanation and taking the time to answer my questions. I concur with your response to weakness section item 2; indeed it would be fine to relegate FB15k-237 results to supplementary material. For item 3, your technique's improved performance from (MRR=0.284 and Hit@10=0.412) to (MRR=0.337 and Hit@10=0.46) now outperforms TuckER, but also suggests that the method deserves a more complete answer than you've provided to item 1. Weighing the results, I have revised my assessment. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you for your support! We sincerely appreciate the time you committed to providing us with constructive feedback.
Summary: This paper proposes a differentiable framework, DiffLogic. Instead of directly approximating all possible triples, the authors design a tailored filter to adaptively select essential triples based on the dynamic rules and weights. The truth scores assessed by KG embedding are continuous, so the authors employ a continuous Markov logic network named probabilistic soft logic (PSL). It uses the truth scores of essential triples to assess the overall agreement among rules, weights, and observed triples. Strengths: (1) This paper develops a unified neuro-symbolic framework, DiffLogic, that combines the advantages of KG-embedding models and rule-based models: efficiency, effectiveness, and the capability of leveraging prior knowledge and handling uncertainty. (2) This paper enables consistent training of rule-based models and KG-embedding models. By employing PSL, the joint probability of truth scores can be optimized directly rather than optimizing an evidence lower bound (ELBO). (3) This paper proposes an efficient grounding technique that iteratively identifies important ground formulas required for inference, enabling effective and data-efficient optimization. Weaknesses: The most impressive advantage of neuro-symbolic methods is that they are interpretable, but I do not see that here. What is the application of this reasoning method? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. The most impressive advantage of neuro-symbolic methods is that they are interpretable, but I do not see that. A. The interpretability of neuro-symbolic methods primarily stems from their logic formulas: using rules to infer new facts is interpretable because a rule is an implication from premise to conclusion. Take, for example, the rule $Father(A, B) \wedge Wife(C, A) \Rightarrow Mother(C, B)$. Inferring a new fact using this rule becomes interpretable when we assign roles to individuals, such as A=Jack, B=Ross, C=Judy, and have the relations “Father(Jack, Ross)” and “Wife(Judy, Jack)”. Consequently, the conclusion “Mother(Judy, Ross)” is obtained in an interpretable manner. In DiffLogic, we utilize the logic rules mined by external rule-mining systems. These rules are employed to infer new facts from existing ones in knowledge graphs. Moreover, our rule-weight updating step learns an importance score for each rule, which reflects the accuracy of these rules. This enables the results to be interpreted through rules and also provides a confidence score for each inference. During inference in the final step, both embedding scores and rule scores are combined to perform inference. In scenarios where interpretability is needed, DiffLogic can extract the ground formulas connected to the inferred facts to interpret the results. Q2. What is the application of this reasoning method? A. DiffLogic can be employed to perform accurate and interpretable knowledge graph reasoning. We can take advantage of its reasoning ability and interpretability for downstream tasks such as: 1. Personalized Recommendation Systems: These systems use knowledge graphs (or user-item interaction graphs) to deliver personalized suggestions relevant to a user's preferences, history and behavior. 2. Question Answering Systems: Digital assistants like Google Assistant and Alexa utilize knowledge graphs to comprehend and respond accurately to complex questions. 3. 
Entity Linking and Disambiguation: Recognizing entities within a text and linking them to corresponding entities in the knowledge graph. Further, it helps decide which entity an ambiguous term refers to, given the context. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response to my questions. Most of my concerns are addressed.
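The interpretable rule application described in Q1 above can be sketched in a few lines. This is a hypothetical illustration using the rebuttal's Father/Wife/Mother example; the function and its proof format are not from DiffLogic's actual code.

```python
# Sketch: applying the grounded rule Father(A,B) AND Wife(C,A) => Mother(C,B)
# to observed facts, returning each inferred fact together with the grounding
# that justifies it (the "interpretation").

facts = {("Father", "Jack", "Ross"), ("Wife", "Judy", "Jack")}

def apply_mother_rule(facts):
    inferred = []
    for (r1, a, b) in facts:
        if r1 != "Father":
            continue
        for (r2, c, a2) in facts:
            if r2 == "Wife" and a2 == a:
                proof = [("Father", a, b), ("Wife", c, a)]
                inferred.append((("Mother", c, b), proof))
    return inferred

new_facts = apply_mother_rule(facts)
# one inference: Mother(Judy, Ross), justified by its two body atoms
```

Each result carries the ground formula that produced it, which is exactly the kind of evidence DiffLogic can surface when interpretability is needed.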
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a framework called DiffLogic for neuro-symbolic reasoning on knowledge graphs. It balances accuracy and efficiency by selecting essential triples based on dynamic rules and weights. The framework uses a continuous Markov logic network named probabilistic soft logic (PSL) for end-to-end differentiable optimization. Empirical results show that DiffLogic outperforms baselines in both effectiveness and efficiency. Strengths: 1. The idea of taking advantage of both MLN and embedding methods makes sense. 2. There are few typos. 3. Several formulas are provided to describe the proposed method. Weaknesses: 1. The authors only classify KG reasoning methods into two kinds, but in recent years there have been many methods using graph neural networks for KG reasoning. 2. The EM algorithm is a common practice in MLN-based methods. It would be better for the authors to add some discussion of the differences between DiffLogic, pLogicNet and MLN4KB. MLN4KB seems to be an important and relevant baseline, but there is no discussion of it in Sections 1, 2, 5. 3. Most of the compared methods are out of date (before 2020, except MLN4KB). Important baselines like RNNLogic[1] and RLogic[2] are missing. ExpressGNN, although cited, is not compared in the experiments. 4. Some of the results in Table 1 are problematic. - For RotatE, the performance on YAGO3-10 is (MRR=0.495, Hit@10=0.670) in their appendix. - For DRUM, the performance on WN18RR is (MRR=0.486, Hit@10=0.586). I did not check the other values, but considering the results above, the improvement is not significant. Based on my experience, several methods have achieved MRR>0.5 on YAGO3-10 recently. 5. The scalability analysis in Section 4.2 is based on a small dataset, Kinship. It would be better to show the running time and memory cost on the larger KGs used in this paper to support the scalability claims. 6. In Section 4.3, the analysis is based on an unused dataset, WN18. 
It is quite strange that the authors use different sets of datasets in different parts. [1] RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs. ICLR 2021 [2] RLogic: Recursive Logical Rule Learning from Knowledge Graphs. KDD 2022 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Can you provide a more thorough literature review? 2. Can you provide a detailed discussion of the MLN-based methods? 3. Can you explain the results in Table 1? 4. Can you explain the usage of datasets? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This paper does not include contents discussing limitations and impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. Can you provide a more thorough literature review? A. Thanks for your suggestion. We will conduct a more thorough literature review, including recent GNN-based methods. We will also add RNNLogic and RLogic as baseline methods in our experiments. See Q3 for more details. Q2. Can you provide a detailed discussion of the MLN-based methods? A. Here we compare our model (DiffLogic) with MLN4KB and pLogicNet, respectively. DiffLogic leverages logic rules through a continuous and differentiable MLN framework called probabilistic soft logic (PSL), which allows for smooth integration of MLN and KGE models and results in a time- and space-efficient implementation. Both MLN4KB and pLogicNet are built on MLN; their inference procedure is essentially a discrete optimization problem that requires sophisticated approximation to solve. MLN4KB only uses rules and cannot make use of the similarity between entities as KGE models do. Though pLogicNet also uses KGE, its overall framework is non-differentiable because it needs to annotate new facts by MLN to train the KGE model. Q3. Can you explain the results in Table 1? A. Yes, we will answer your question from two aspects: 1) reliability of baseline results; 2) comparison with other baselines such as RNNLogic and RLogic. 1) Reliability of baseline results - {weakness 4.1} The unmatched baseline performance (RotatE MRR/Hit@10 on YAGO3-10) is due to our choice of a uniform negative sampling scheme. KGE models may employ different negative sampling schemes, e.g., TransE employs uniform sampling and RotatE employs adversarial negative sampling. To make a fair comparison, we applied a uniform sampling scheme across all models. Below we attach the results of RotatE and our model under adversarial negative sampling, and argue that the choice of negative sampling scheme does not affect the conclusion in our paper. 
||CoDEx-S MRR|Hit@10|CoDEx-M MRR|Hit@10|CoDEx-L MRR|Hit@10|WN18RR MRR|Hit@10|YAGO3-10 MRR|Hit@10|
|-|-|-|-|-|-|-|-|-|-|-|
|RotatE|0.421|0.634|0.325|0.466|0.319|0.453|0.469|0.566|0.495|0.670|
|DiffLogic|0.445|0.662|0.335|0.487|0.326|0.448|0.493|0.585|0.503|0.673|
|DiffLogic$^+$|0.458|0.655|0.343|0.495|0.337|0.460|0.500|0.587|0.513|0.674|

We can see that the results of RotatE now match. Meanwhile, the results for DiffLogic are also improved and still outperform or are comparable to the other baselines. Thanks for your careful review; we will clarify this subtle point and include the comparison with adversarial negative sampling in our manuscript. 2) Comparison with other baselines - {weaknesses 4.2 and 3} We present the results of the other baselines below, including AMIE3, DRUM(t=2/t=3), RNNLogic, and RLogic. Note that these baselines are all rule-learning methods.

||CoDEx-S MRR|Hit@10|CoDEx-M MRR|Hit@10|CoDEx-L MRR|Hit@10|WN18RR MRR|Hit@10|YAGO3-10 MRR|Hit@10|
|-|-|-|-|-|-|-|-|-|-|-|
|AMIE3|0.195|0.283|0.063|0.095|0.026|0.029|0.36|0.485|0.25|0.343|
|DRUM(T=2)|0.290|0.393|NA|NA|NA|NA|0.434|0.565|NA|NA|
|DRUM(T=3)|0.342|0.542|NA|NA|NA|NA|0.486|0.586|NA|NA|
|RNNLogic$^+$|-|-|-|-|-|-|0.51|0.597|NA|NA|
|RLogic$^+$|-|-|-|-|-|-|0.52|0.604|0.53|0.703|

Regarding the comparison with rule-learning methods, we want to highlight that our model uses simple rules (rule body length $\le$ 2) extracted by AMIE3, while DRUM(t=3), RNNLogic, and RLogic are advanced rule-learning systems that can extract longer, high-quality rules. We kindly argue that a direct comparison of our method with other advanced rule-mining systems could be unfair. If more high-quality rules were used in DiffLogic, the performance could be improved further. We will include the results in our revision. 
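For reference, the uniform negative sampling scheme applied across all models for the fair comparison can be sketched as follows. This is a minimal illustration of the standard filtered-corruption procedure; entity IDs and the corruption helper are illustrative, not from the paper's implementation.

```python
# Sketch of uniform negative sampling for KGE training: corrupt the head
# or tail of a positive triple uniformly at random, filtering out triples
# known to be true.
import random

def uniform_negatives(triple, num_entities, k, known_triples):
    h, r, t = triple
    negatives = []
    while len(negatives) < k:
        e = random.randrange(num_entities)
        # corrupt head or tail with equal probability
        cand = (e, r, t) if random.random() < 0.5 else (h, r, e)
        if cand not in known_triples:  # filtered setting
            negatives.append(cand)
    return negatives

known = {(0, 0, 1), (1, 0, 2)}
negs = uniform_negatives((0, 0, 1), num_entities=100, k=4, known_triples=known)
```

Adversarial negative sampling (as in RotatE) additionally reweights these candidates by the model's own scores, which is why switching schemes shifts the absolute numbers in the table above without changing the relative conclusions.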
Regarding ExpressGNN, as we point out in lines 245-247 of the manuscript, it requires querying test data during training, which is not applicable in our experimental setting; thus we excluded it from our baselines. More details can be found in the OpenReview discussion at https://openreview.net/forum?id=rJg76kStwH. Q4. Can you explain the usage of datasets? A. Yes, we chose the WN18 dataset because it is more challenging and better suited to demonstrating DiffLogic's rule-injection ability. Specifically, injecting rule patterns effectively back into the learned embeddings becomes challenging when the number of rules is small. In this situation, DiffLogic still successfully injects the rule pattern into the representations on WN18 with only 7 rules, even though a significant portion (approximately 36%) of the original training set has been removed. By comparison, the other datasets have more high-scoring rules (confidence score $>$ 0.8): YAGO3-10, WN18RR, CoDEx-S, CoDEx-M, and CoDEx-L have 22, 13, 35, 52, and 56 such rules, respectively, making them less challenging. Concerns about scalability. For weakness 5, we provide the run-time and memory overhead of the rule-grounding process on real-world datasets. The run-time is evaluated 10 times to report the mean and standard deviation.

|Dataset|Grounding run-time (sec)|Memory overhead (MB)|
|-|-|-|
|YAGO3-10|3.20±0.04|262.65|
|WN18RR|0.54±0.01|18.73|
|CoDEx-S|0.03±0.00|2.19|
|CoDEx-M|0.38±0.01|11.57|
|CoDEx-L|0.87±0.04|25.58|

We hope our response satisfactorily addresses your concerns, and we would appreciate it if you could consider raising the score. --- Rebuttal Comment 1.1: Title: After rebuttal comments Comment: Thanks for your detailed response to my questions. Most of my concerns are addressed and I have updated my rating. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thanks for the support! We sincerely appreciate the time and effort you have invested in assessing our work, and thank you for providing us with constructive feedback!
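To give a sense of what the rule-grounding step whose overhead is reported above involves, here is a hypothetical sketch of grounding a length-2 chain rule against observed triples. The rule shape, relation names, and index structure are illustrative assumptions, not DiffLogic's actual implementation.

```python
# Sketch: enumerate ground formulas of r1(x,y) AND r2(y,z) => head(x,z)
# whose body atoms are observed, i.e., the groundings that can contribute
# to the PSL objective.
from collections import defaultdict

def ground_rule(triples, body_rels, head_rel):
    # index triples by (relation, head entity) for fast joins on y
    by_rel_head = defaultdict(list)
    for h, r, t in triples:
        by_rel_head[(r, h)].append(t)
    r1, r2 = body_rels
    groundings = []
    for h, r, t in triples:
        if r != r1:
            continue
        for z in by_rel_head.get((r2, t), []):
            groundings.append(((h, r1, t), (t, r2, z), (h, head_rel, z)))
    return groundings

triples = [("a", "locatedIn", "b"), ("b", "locatedIn", "c")]
gs = ground_rule(triples, ("locatedIn", "locatedIn"), "locatedIn")
# one grounding: locatedIn(a,b) AND locatedIn(b,c) => locatedIn(a,c)
```

Indexing by (relation, entity) keeps each grounding pass roughly linear in the number of matching triples, which is consistent with the sub-second grounding times reported for most datasets in the table above.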
null
null
null
null
null
null